



Friday 17th July, 09:00 - 10:30 Room: HT-105


Estimating effects of modes and mixed modes designs 2

Convenor Mr Alexandru Cernat (Institute for Social and Economic Research, University of Essex)

Session Details

Traditional approaches to data collection in the social sciences (i.e., face-to-face and telephone surveys) are becoming more expensive. At the same time, cheaper approaches, such as web surveys, lack traditional sampling frames. This has led to a surge in data collection designs that aim to combine the strengths of each mode in a single survey. In this context, accumulating evidence that informs design decisions in mixed-mode surveys is essential.

This session will contribute to this debate by tackling some important topics such as:
- Is the effect of social desirability moderated by mode?
- How do self-administered strategies (e.g., paper and web) differ in data quality?
- Are traditional scales (like those measuring personality, depression, cognitive ability) equivalent across modes?
- How does selection/non-response bias differ across modes?
- Does the use of mixed mode data impact substantive results?
- How does research on mixed modes integrate into the Total Survey Error framework?
- How can mode effects be prevented through design?

Paper Details

1. Mode Effects in Personality Measurement - An Experimental Investigation of the Interviewer's Influence
Ms Luisa Hilgert (German Socio-Economic Panel Study (SOEP))
Professor Martin Kroh (German Socio-Economic Panel Study (SOEP))
Dr David Richter (German Socio-Economic Panel Study (SOEP))

This study measures the effects of different modes of data collection on the measurement of the Big Five model of personality. Participants of the German Socio-Economic Panel (SOEP) were randomly assigned to two different modes of data collection - computer-assisted personal interviews (CAPI) and computer-assisted self-interviews (CASI). Participants then answered the BFI-S. The study inspects measurement equivalence and interviewer effects as well as possible differences in factor means, response styles and nonresponse across the two modes.


2. Volunteering, Survey Mode, and Consent
Dr Nikki Graf (University of Mannheim)

Research suggests that volunteering and prosocial behaviors are linked to survey participation, with implications for nonresponse and estimates of these behaviors. We examine how reported volunteering varies across survey mode and consent to share contact details with a survey agency. We use data from a 2014 survey in Germany, with telephone and web survey components and opt-in vs. opt-out consent question wording. Examining differences among samples in their volunteering activities provides insight into consent questions, nonresponse, and possible bias. Results hold potential importance for mixed mode surveys, informed consent options, and assessing estimates of civic and prosocial behaviors.


3. Mode effects on measures of wellbeing: a comparison between uni- and multi-dimensional approaches
Miss Rosa Sanchez Tome (NCCR LIVES- University of Lausanne)
Ms Caroline Roberts (University of Lausanne)
Ms Michèle Ernst Stahli (FORS - University of Lausanne)

Wellbeing is difficult to define and measure, especially at a time when surveys struggle to obtain high-quality data. To overcome these difficulties, there is some agreement that valid single-item wellbeing measures exist and that different modes of data collection are needed. These survey design choices can, however, have negative consequences. Comparing estimates across modes, we find measurement differences in the questions used by single-measure studies and investigate whether mode effects in single-item measures make any difference to the substantive conclusions.


4. Understanding mode effects in reading assessment – the impact of item and person covariates
Mrs Sarah Buerger (German Institute of International Educational Research (DIPF))
Mr Ulf Kroehne (German Institute of International Educational Research (DIPF))
Mr Frank Goldhammer (German Institute of International Educational Research (DIPF), Centre for International Student Assessment (ZIB))

For the transition from paper-based assessment (PBA) to computer-based assessment (CBA), mode effect studies are highly relevant to clarify the comparability of both test forms and to understand the consequences of incomparability. In the National Educational Panel Study (NEPS), additional experimental studies were conducted to find out more about differences between administration modes. Multiple-group models were used to investigate mode effects with respect to construct equivalence and item parameters, related to measurement properties such as the complexity of the response format. Individual differences in the mode effect were analyzed using person covariates such as ICT literacy.