ESRA 2017 Programme

Friday 21st July, 09:00 - 10:30 Room: F2 108

Mixing modes and mode effects

Chair Professor Caroline Bayart (University Lyon 1)
Coordinator Professor Patrick Bonnel (ENTPE - University Lyon 2)

Session Details

Survey response rates are decreasing across the world. Even if weighting procedures make it possible to reduce the incidence of non-response, it is still necessary to assume that people with given socio-demographic characteristics who do not respond to a survey behave in the same way as people with the same characteristics who do respond. But evidence seems to indicate that this is not always the case, and survey non-response might produce bias. Efforts are made to increase response rates for traditional surveys by improving the questionnaire, reducing respondent burden, increasing reminders… Even if the results are generally positive, in most cases they are not sufficient.
A way to balance the impact of non-response and produce more reliable results is to propose a second (or further) medium and let people choose the appropriate mode and moment to answer. New and interactive media (web, smartphones…) seem to have high potential for data collection. But these solutions also generate some biases. First, in terms of the design and administration of the questionnaire, which may vary according to the mode. Then, the generalization of the results to the whole population sometimes remains an issue (penetration rate, technical feasibility…). Lastly, the question of data comparability remains. When mixed survey modes are used, individuals choose to belong to one group or another, or only respond if the proposed medium suits them. The responses are therefore not completely comparable, because the sample is no longer random and the presence of respondents is determined by external factors, which may also affect the variable of interest in the studied model. The danger when databases are merged is that a sample selection bias will be created and compromise the accuracy of explanatory models.
The aims of the session are to discuss the potential of new technologies in a mixed-mode framework, to characterize the biases generated by mixed-mode surveys, and to offer some perspectives for reducing these biases.

Paper Details

1. A method for controlling and correcting for measurement mode effects to aggregate samples in mixed-mode surveys
Mr Stéphane Legleye (INSEE-DMCSI)
Mr Gaël de Peretti (INSEE-DMCSI)
Mr Tiaray Razafindranovona (INSEE-DMCSI)

Using data collection modes A and B in a survey leads to aggregation problems. Most of the literature on the topic focuses on estimating the measurement effect for a variable y due to the difference in data collection modes. Doing this implies controlling for selection bias, i.e. differences in sociodemographics plus potential additional variables (matrix X) between the samples interviewed in A and B: classically, this is done using reweighting (calibration or inverse propensity weighting) or matching techniques.
Afterwards, one has to choose whether or not to correct, depending on the reliability of modes A and B for the measurement of y, on the purpose of the mixed-mode survey (improving either coverage or measurement), and on potential comparability constraints with previous surveys. But weighting procedures cannot correct for the measurement effect, as it is a measurement problem: individuals do not behave in the same way in A and B.
We propose a three-step method that aggregates A and B and controls for the measurement effect when mode A is the reference. First, a calibration or inverse propensity weighting based on X helps identify the set of variables Y with measurement effects that need correction. Second, we fit a logistic model of answering in B rather than A in the pooled sample {A,B}, controlling for X, Y and potential interactions of variables in X and Y, and match the observations in A and B (potentially 1:n) on the propensity score. Provided some adjustments in the model, the matched observations of B and A are balanced in Y, X and their potential interactions, as demonstrated by Rosenbaum and Rubin in 1983. Third, we impute the values of Y in the unmatched subsample of B from the complementary observations of {A, B} using a multiple imputation technique. The related variance can be estimated classically.
Our method simultaneously treats the problem of measurement bias and of differential associations between sociodemographics and variables of interest across modes, provided that the interactions have been included in the propensity modelling. It is more parsimonious than imputing Y in the whole sample interviewed via B, as proposed by Tuba Suzer Gurtekin in her PhD dissertation (2013). As a consequence, the variance due to imputation is limited. The method can be applied to any mixed-mode design provided that sampling weights are available, that modelling the answer in B versus A is possible, and that A and B are of sufficient size.
Some important limitations are worth noting: the method does not provide the true measurement but takes mode A as the reference mode for Y; and the proportion of matched observations may be small if the propensity model is complex and the selection bias between samples is large.
Our method is illustrated in a random survey on crime and victimisation held in 2013 in France.
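The propensity step of a method of this kind can be sketched in a few lines of numpy. The simulated data, sample sizes and the Newton-Raphson logit fit below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logit(X, y, iters=50):
    """Logistic regression by Newton-Raphson (the propensity model)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        W = p * (1 - p)
        H = Xb.T @ (Xb * W[:, None]) + 1e-8 * np.eye(Xb.shape[1])
        beta += np.linalg.solve(H, Xb.T @ (y - p))
    return beta

# Simulated covariates for modes A (reference) and B;
# B is shifted in X to mimic a selection bias between samples
n_a, n_b = 300, 200
X_a = rng.normal(0.0, 1.0, size=(n_a, 2))
X_b = rng.normal(0.5, 1.0, size=(n_b, 2))

X = np.vstack([X_a, X_b])
mode_b = np.concatenate([np.zeros(n_a), np.ones(n_b)])  # 1 = answered in B

# Propensity of answering in B, fitted on the pooled sample {A, B}
beta = fit_logit(X, mode_b)
ps = 1.0 / (1.0 + np.exp(-(np.column_stack([np.ones(len(X)), X]) @ beta)))
ps_a, ps_b = ps[:n_a], ps[n_a:]

# 1:1 nearest-neighbour matching of each B observation to an A
# observation on the propensity score (with replacement)
matches = np.abs(ps_b[:, None] - ps_a[None, :]).argmin(axis=1)

# After matching, the A and B subsamples should be better balanced in X
gap_before = np.abs(X_b.mean(0) - X_a.mean(0)).max()
gap_after = np.abs(X_b.mean(0) - X_a[matches].mean(0)).max()
print(gap_before, gap_after)
```

In a real application the unmatched part of B would then be completed by multiple imputation, and the model would also include the Y variables and interactions discussed in the abstract.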

2. Measuring cognition in a multi-mode context: Comparability and challenges in administering complex measures on the web
Ms Colleen McClain (University of Michigan)
Dr Mary Beth Ofstedal (University of Michigan)
Dr Mick P. Couper (University of Michigan)

As large-scale longitudinal surveys that have traditionally been administered via telephone or face-to-face modes increasingly move toward including a web option, challenges arise in adapting to self-administration. Striking a balance between taking advantage of the opportunities of the self-administered, computer-based mode and maximizing comparability with the interviewer-administered modes for the concurrent and past waves presents operational and substantive challenges for survey researchers concerned about measurement error.

In particular, the measurement of cognition via a series of tests—common in longitudinal surveys of the general population, as well as specific studies of aging—raises questions about design, feasibility, and respondent burden. In some cases, the tests that have formed the backbone of interviewer-administered research designs are difficult or impossible to administer in a web setting, raising questions about how to design a battery that minimizes measurement error and respondent difficulty while maximizing comparability and response quality. Furthermore, and of particular relevance for studies of aging across the world, these issues may be exacerbated for older respondents who may be unfamiliar with technology or have cognitive impairments that could affect the quality and completeness of the data differentially across modes. Despite the challenges involved in working with a mixed-mode study of this nature and the movement of many longitudinal studies to web administration, few mode comparisons on the topic of cognition exist in the current literature. Thus, the implications of mixed-mode design decisions still remain largely unclear.

To address this gap, we present a discussion of issues involved in designing a web-based administration of the cognition measures within the Health and Retirement Study (HRS), which plans to offer a web option in upcoming waves. We present results from an analysis of mode differences between self-administered and interviewer-administered cognitive tests, focusing on the diverse set of cognition measures that were included in the 2013 HRS Internet Survey, an off-year survey of a subset of core wave respondents. Versions of most of these tests, including word recognition, verbal analogies, number series, Serial 7s, and others, are also included in the HRS biennial core interview which, to date, has been interviewer-administered. We assess the differences in conclusions drawn using these measures versus those in an adjacent interviewer-administered core wave, and restrict our analysis to respondents who completed the measures in both modes, allowing us to examine differences at the within-respondent level and draw stronger conclusions about the potential implications of mixing modes. Specifically, we examine associations between the cognition measures and known predictors and outcomes, as well as correlations among the cognition measures themselves, across modes; assess differences in data quality using paradata (in particular response latencies) from both modes; and identify particular tests for which mixed-mode designs may be more or less problematic. Our paper provides one of the first comprehensive mixed-mode analyses of this important topic and suggests future directions and opportunities for measuring cognition while minimizing bias in a changing survey environment.

3. Adult Education Survey in a mixed-mode design
Ms Eva Belak (Statistical Office of the Republic of Slovenia)
Mrs Marta Arnež (Statistical Office of the Republic of Slovenia)
Professor Vasja Vehovar (Faculty of Social Sciences)

The Statistical Office of the Republic of Slovenia implemented the Adult Education Survey in autumn 2016 (n=8,504). The survey was last conducted in 2011 in a sequential mixed-mode design, in which telephone interviewing (CATI) was followed by face-to-face interviewing (CAPI); for coverage and financial reasons, the 2016 main survey was instead conducted by web, followed by CATI and CAPI.

For the 2016 survey, the feasibility of the web survey mode was tested. In April 2016 a pilot web survey was conducted (n=2,075). The main goal of the pilot was to assess the quality of the reported informal and formal activities for precise coding; other goals were to analyse in detail the obtained paradata (interviewing time depending on how many activities were reported, drop-off rate, etc.) and to test the overall response rate as well as response rates by age group and other socio-demographic characteristics of the selected persons. After the pilot web survey, a re-interview survey was conducted in CATI mode with 500 units. The goal of the re-interview survey was to assess the quality of the web questionnaire.

The results of the web pilot will be compared to those of the main web survey. The main added value of this paper is to illustrate the problems related to the introduction of web data collection into mainstream survey data collection: in what circumstances and when could (or should) this be done? What justifications and preliminary studies are required? What are the risks involved?

4. Mixed-mode transport survey: a French case study
Dr Caroline Bayart (University Lyon 1)
Professor Patrick Bonnel (ENTPE - University Lyon 2)

The growing difficulty of obtaining data from representative surveys of the population, and the growing complexity of the data required for increasingly sophisticated models, now generally make it impossible to gather all the data in a single survey or according to a single methodology.
Research on survey methods therefore needs to address mixed-mode survey protocols. To that end, the Transport Economics and Planning Laboratory initiated an online survey involving people who did not respond to the household travel survey conducted by phone in the Rhône-Alpes region (France’s second biggest region, with a population of 6 million) between 2012 and 2015. People who refused to answer the standard survey or who were unavailable were contacted in a second phase to complete the online questionnaire.
When mixed survey modes are used, individuals choose to belong to one group or another or only respond if the proposed medium suits them. The responses are therefore not completely comparable, because the sample is no longer random and the presence of respondents is determined by external factors, which may also affect the variable of interest in the studied model. It is highly likely that the socioeconomic characteristics and the travel behaviors of the individuals who respond using the Internet are different from those of the individuals who respond to a phone interview. The danger when databases are merged is that a sample selection bias will be created that will compromise the accuracy of explanatory models of travel behaviors.
The paper first discusses the potential of the web for household travel surveys, especially in a mixed-mode framework. Some thoughts on the Rhône-Alpes online questionnaire and the design choices made relative to its phone version are provided. Then we carry out a comparative analysis of mobility behaviors between households who answered online and those subjected to the standard CATI questionnaire. The data show that Internet respondents declare fewer trips than respondents to the phone survey.
The aim of the paper is to characterize a selection bias, which could be corrected using a two-stage estimation method. The first stage consists of estimating the survey medium “choice” equation using a Probit model. The second stage consists of explaining the differences in travel behavior using a specific model. We identify a selection bias and show that the data collection mode (web or phone) has a direct impact on mobility: the number of trips fell by 0.6 when subjects answered the questionnaire online. We will develop further analyses considering travel time and travel distance budgets. First results indicate a much smaller impact on those indicators than on the number of trips. This result is interesting in light of the controversial Zahavi hypothesis: is it really a selection bias, or does the web medium allow surveying individuals whose behavior partially differs from that of phone respondents?
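A two-stage correction of this general kind can be sketched as a Heckman-style control-function estimator: a Probit choice equation in the first stage, then an outcome regression augmented with an inverse Mills ratio term. The simulated data, variable names and coefficients below are invented for illustration and are not the authors' actual model or results:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated data: z drives the medium "choice", x drives the trip count
n = 1000
x = rng.normal(size=n)
z = rng.normal(size=n)
u = rng.normal(size=n)                   # error in the selection equation
web = (0.5 * z + u > 0).astype(float)    # 1 = answered online

# Outcome with a negative mode effect and an error correlated with selection
trips = 3.0 + 0.8 * x - 0.6 * web + 0.5 * u + rng.normal(scale=0.5, size=n)

# Stage 1: Probit "choice" equation for the survey medium
Zb = np.column_stack([np.ones(n), z])
def neg_loglik(g):
    p = norm.cdf(Zb @ g).clip(1e-10, 1 - 1e-10)
    return -(web * np.log(p) + (1 - web) * np.log(1 - p)).sum()
g_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x

# Inverse Mills ratio, with the sign depending on the chosen medium
xb = Zb @ g_hat
imr = np.where(web == 1,
               norm.pdf(xb) / norm.cdf(xb),
               -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Stage 2: trips regressed on covariates, mode, and the correction term
Xmat = np.column_stack([np.ones(n), x, web, imr])
coef, *_ = np.linalg.lstsq(Xmat, trips, rcond=None)
print(coef)   # intercept, x effect, mode effect, selection term
```

With the correction term included, the coefficient on the mode dummy approximates the mode effect net of selection, which is the quantity the abstract's two-stage method is after.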
Finally, we give some perspectives for future household travel surveys, as it seems important to take into account recent changes in society and new needs in order to improve the methodology of household travel surveys.