Tuesday 18th July, 16:00 - 17:30 Room: N AUD5


It’s the Interviewers! New developments in interviewer effects research 4

Chair Dr Salima Douhou (City University of London, CCSS)
Coordinator 1 Professor Gabriele Durrant (University of Southampton)
Coordinator 2 Dr Olga Maslovskaya (University of Southampton)
Coordinator 3 Dr Kathrin Thomas (City University of London, CCSS)
Coordinator 4 Mr Joel Williams (TNS BMRB)

Session Details

To what extent do interviewers affect data collection and how can we better monitor and limit their impact?

Any deviation from the standardised protocol of the data collection process has the potential to introduce bias into the data. Interviewer effects, defined as distortions of survey responses in surveys with interviewer presence, may have a severe impact on data quality. These effects result from respondents’ reactions to the social style and personality of interviewers, but also to the way interviewers present the questions.

Analyses based on data that are biased by interviewer intervention, and the conclusions drawn from them, are likely to be incorrect. Hence, survey methodologists have improved the way in which interviewers are trained and briefed in order to limit the interviewers’ influence. Yet it remains an open question why interviewer effects occur even in surveys that make exceptional efforts to train and monitor interviewers.

Interviewers make (initial) contact with prospective respondents and attempt to convince them to participate in the survey. The doorstep interaction between prospective respondents and interviewers is rarely documented, but a growing number of studies indicate that some interviewers are more successful than others at convincing prospective respondents to participate in a survey and thus at avoiding nonresponse.

Once the doorstep interaction has been successful, interviewers may further affect the way in which respondents answer the survey questions. Variation in survey responses may be due to the attitudes, interpersonal skills and personality of interviewers, but may also relate to how interviewers present particular questions and how strictly they follow the instructions. Any deviation from the standardised protocol provided by the survey project’s core research team decreases the comparability of the survey responses.

This session welcomes papers on new developments in the area of interviewer effects. Topics may include but are not restricted to:
• methodological developments in measuring and modelling interviewer effects,
• interviewer effects on measurement error,
• interviewer effects on nonresponse rates and nonresponse bias,
• interviewer influences on response latencies (timings),
• influence of personality traits, behaviour, attitudes, experience, and other characteristics of interviewers on survey estimates,
• implications for interviewer recruitment and training strategies,
• monitoring and evaluation of fieldwork efforts by interviewers,
• collection of GPS data or audio-visual material of door-step interactions.

Papers that discuss these issues from a comparative perspective are also welcome. We invite academic and non-academic researchers and survey practitioners to contribute to our session.

Paper Details

1. Explaining Interviewer Effects on Unit Nonresponse: A Cross-Survey Analysis
Professor Annelies Blom (School of Social Sciences, University of Mannheim)
Ms Daniela Ackermann-Piek (German Internet Panel, SFB 884, University of Mannheim)
Dr Julie Korbmacher (Survey of Health, Ageing and Retirement in Europe, Munich Center for the Economics of Aging)
Mr Ulrich Krieger (German Internet Panel, SFB 884, University of Mannheim)

Interviewers perform many different tasks when administering a survey and are thus crucial to the data collection process. However, they are – intentionally or unintentionally – a potential source of survey error. Today, a large body of literature has accumulated measuring interviewer effects on unit nonresponse. Far fewer studies, however, explain the interviewer effects found, even though explaining interviewer effects, developing methods to reduce them, and finding ways to adjust for them in analyses would benefit survey practitioners and analysts alike.

Recently, West and Blom (2016) have published a research synthesis on factors explaining interviewer effects on various sources of survey error, including unit nonresponse. They find that the literature reports great variability across studies in the significance and even direction of predictors of interviewer effects on unit nonresponse. This variability in findings across studies may be due to a lack of consistency in key characteristics of the surveys examined, such as the group of interviewers employed, the survey organizations managing the interviewers, the sampling frame used, and the populations and time periods observed. In addition, the explanatory variables available to the researchers examining interviewer effects on nonresponse differ greatly across studies and may thus influence the results.
This diversity in findings, survey characteristics, and explanatory variables available for analyses calls for a more orchestrated effort to explain interviewer effects on unit nonresponse. Our paper fills this gap: our analyses are based on four German surveys administered by the same survey organization with the same pool of interviewers, and they use the same area control variables and identical explanatory variables at the interviewer level.

Despite the numerous similarities across the four surveys, our results show high variability in the interviewer characteristics that explain interviewer effects on unit nonresponse.
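
The size of such interviewer effects on unit nonresponse is commonly summarised as an intra-interviewer correlation from a multilevel model with interviewer random effects. The following Python sketch illustrates that quantity on simulated fieldwork data; it is not the authors’ code, and all numbers are illustrative assumptions.

# Minimal sketch (illustrative assumptions, not the authors' code): the size
# of interviewer effects on unit nonresponse summarised as an
# intra-interviewer correlation (ICC), computed on simulated outcomes.
import numpy as np

rng = np.random.default_rng(0)
n_interviewers, cases_each = 100, 50

# Assumed interviewer effects on the log-odds of response (sd = 0.5).
u = rng.normal(0.0, 0.5, n_interviewers)

# Baseline response propensity of roughly 60% plus the interviewer effect.
logit_p = 0.4 + np.repeat(u, cases_each)
responded = rng.random(n_interviewers * cases_each) < 1 / (1 + np.exp(-logit_p))

# Latent-variable ICC for a logistic model: var(u) / (var(u) + pi^2 / 3).
# With real data, var(u) would come from a fitted multilevel logistic model;
# here we plug in the known simulation variance.
icc = 0.5**2 / (0.5**2 + np.pi**2 / 3)
print(f"latent intra-interviewer correlation: {icc:.3f}")

# Crude empirical check: spread of interviewer-level response rates
# (this still contains binomial sampling noise).
rates = responded.reshape(n_interviewers, cases_each).mean(axis=1)
print(f"between-interviewer sd of response rates: {rates.std(ddof=1):.3f}")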

Literature
West, B. T., & Blom, A. G. (2016). Explaining interviewer effects: A research synthesis. Journal of Survey Statistics and Methodology. First published online: November 1, 2016, 1-37. doi: 10.1093/jssam/smw024


2. Are Interviewer Effects on Interview Pace Related to Interviewer Effects on Straight-Lining Tendency in the European Social Survey? An interviewer-level analysis of interview pace and straight-lining tendency
Dr Caroline Vandenplas (KU Leuven)
Dr Koen Beullens (KU Leuven)
Dr Katrijn Denies (KU Leuven)
Professor Geert Loosveldt (KU Leuven)

Many, if not all, face-to-face surveys are subject to interviewer effects on a range of outcomes. Previous research shows that interview length and speed are affected by interviewer effects to a large extent. Straight-lining tendency and other satisficing symptoms have also been shown to be subject to interviewer effects, albeit to a smaller extent. Moreover, one can expect interview speed and satisficing/straight-lining tendency to be related: higher speed can increase cognitive difficulty, which respondents might cope with through straight-lining, whereas straight-lining can decrease response latency and hence increase interview speed. In this paper, we first repeat previous analyses of interviewer effects on interview speed and straight-lining tendency for the seventh round of the European Social Survey. The results confirm previous findings: a large intra-interviewer correlation coefficient for interview speed, and a somewhat smaller one for straight-lining. We then study the correlation between interview speed and straight-lining tendency, without determining causality, and decompose it into an interviewer-level and a respondent-level component (a sketch of this decomposition follows below). Results show that the positive interviewer-level correlation between interview speed and straight-lining tendency surpasses the respondent-level correlation. This indicates that ‘fast’ interviewers are the ones carrying out interviews during which more straight-lining occurs.
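
To make the decomposition concrete, the following Python sketch (our illustration, not the authors’ code) splits a speed/straight-lining correlation into a between-interviewer and a within-interviewer (respondent-level) component on simulated data; the assumed correlations of 0.6 and 0.1 are arbitrary.

# Minimal sketch: decomposing the correlation between interview speed and
# straight-lining into interviewer-level and respondent-level components.
import numpy as np

rng = np.random.default_rng(1)
n_int, n_resp = 200, 20  # interviewers, respondents per interviewer

# Assumed interviewer-level correlation of 0.6: 'fast' interviewers also
# elicit more straight-lining.
b = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n_int)

# Assumed respondent-level correlation of only 0.1.
w = rng.multivariate_normal([0, 0], [[1.0, 0.1], [0.1, 1.0]], n_int * n_resp)

ids = np.repeat(np.arange(n_int), n_resp)
speed = b[ids, 0] + w[:, 0]
straight = b[ids, 1] + w[:, 1]

# Between component: correlation of interviewer means.
m_speed = np.bincount(ids, speed) / n_resp
m_straight = np.bincount(ids, straight) / n_resp
print("interviewer-level r:", round(np.corrcoef(m_speed, m_straight)[0, 1], 2))

# Within component: correlation of deviations from the interviewer means.
print("respondent-level r:",
      round(np.corrcoef(speed - m_speed[ids],
                        straight - m_straight[ids])[0, 1], 2))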


3. Modelling of Interviewer Experience
Mr Felix Benjamin Grobe (Leibniz Institute for Educational Trajectories)

This presentation compares different ways of modelling interviewer experience in computer-assisted personal interviews.

Interviewer experience is one of the most studied influences on unit nonresponse, especially on the response rate. A review of previous studies on the impact of interviewer experience on unit nonresponse shows that researchers model experience in different ways. Based on the literature, three levels of interviewer experience can be distinguished: a macro level (e.g. experience as an interviewer over the lifetime), a meso level (e.g. experience over multiple waves of a longitudinal survey) and a micro level (e.g. experience over a survey’s field period). As a consequence, it is hardly possible to compare existing results, and it is not clear whether different kinds of modelling influence the results. Accordingly, it is necessary to analyse the modelling of interviewer experience itself.

For this analysis, information about the interviewers from the National Educational Panel Study (NEPS) is used, in particular from starting cohort one (SC1). The aim is to find out whether the explained variance in unit nonresponse differs across the following kinds of modelling (a sketch of how such measures can be constructed follows the list):

Macro level:
• years of experience in a specific survey organisation
Meso level:
• number of interviewer deployments within a specific starting cohort (NEPS; SC1)
Micro level:
• number of conducted interviews within a specific wave (NEPS; SC1; wave 3)
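
As a rough illustration of how the three experience measures might be derived from fieldwork records, consider the following Python sketch; the table and its column names are hypothetical and do not reflect the actual NEPS data structure.

# Minimal sketch (hypothetical columns, not the NEPS files): deriving the
# three experience measures from a fieldwork contact record.
import pandas as pd

contacts = pd.DataFrame({
    "interviewer_id":  [1, 1, 1, 2, 2],
    "years_at_agency": [10, 10, 10, 2, 2],  # assumed macro-level attribute
    "wave":            [1, 2, 3, 3, 3],
    "interview_date":  pd.to_datetime(["2013-01-05", "2014-02-01",
                                       "2015-03-10", "2015-03-01",
                                       "2015-03-20"]),
})

# Macro: years of experience at the survey organisation.
macro = contacts.groupby("interviewer_id")["years_at_agency"].first()

# Meso: number of waves in which the interviewer was deployed.
meso = contacts.groupby("interviewer_id")["wave"].nunique()

# Micro: running count of interviews already conducted within wave 3's
# field period, in chronological order.
wave3 = contacts[contacts["wave"] == 3].sort_values("interview_date").copy()
wave3["micro_experience"] = wave3.groupby("interviewer_id").cumcount()

print(macro, meso, wave3[["interviewer_id", "micro_experience"]], sep="\n\n")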


4. Interviewer Effects in Factorial Survey Experiments
Ms Sandra Walzenbach (University of Konstanz)
Professor Katrin Auspurg (University of Munich)
Professor Thomas Hinz (University of Konstanz)

Does interviewer presence affect answers in factorial survey experiments? Does it enhance data quality or foster social desirability bias when sensitive dimensions are included? What role do interviewer characteristics play?

Over the past decades, factorial survey experiments have become an increasingly popular tool in many subfields of the social sciences. Part of their popularity clearly stems from the possibility of modelling decision scenarios more realistically than single-item questions: by including various dimensions at once, vignettes can account for the fact that real-life decisions typically require the simultaneous consideration of several factors. An independent but joint experimental variation of the factors allows the researcher to quantify their impact on the requested evaluations and to estimate trade-offs and interactions between the different factors.
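
As a minimal illustration of this factorial logic, the following Python sketch builds a vignette universe by fully crossing a set of hypothetical dimensions (not the authors’ actual design) and draws a random deck for one respondent.

# Minimal sketch (hypothetical dimensions, not the authors' design): the
# vignette universe of a factorial survey is the full cross of all dimension
# levels, from which each respondent receives a random deck.
import itertools
import random

dimensions = {
    "sex": ["male", "female"],
    "occupation": ["nurse", "engineer", "clerk"],
    "gross_earnings": ["2,000 EUR", "3,500 EUR", "5,000 EUR"],
}

# Full factorial: 2 * 3 * 3 = 18 distinct vignettes.
universe = [dict(zip(dimensions, combo))
            for combo in itertools.product(*dimensions.values())]

random.seed(0)
for v in random.sample(universe, k=6):  # one respondent rates six vignettes
    print(f"A {v['sex']} {v['occupation']} earns {v['gross_earnings']} gross "
          "per month. How fair are these earnings?")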

Factorial survey experiments require visual representation and are therefore suitable for implementation in completely self-administered survey modes (CASI and PAPI) as well as in self-administered modules within face-to-face interviews. For practitioners who have to choose between these modes of data collection, it is crucial to know whether the additional costs of personal interviews pay off in terms of better data quality, that is, less item nonresponse, more consistent answers and less heuristic response behaviour. Nonetheless, and somewhat surprisingly, interviewer effects in factorial survey experiments have, to the best of our knowledge, not received any attention in survey research so far.

We make a first step towards filling this void by presenting results from a vignette module on the fairness of earnings that was fielded in two different survey modes: one with interviewer presence and one completely self-administered. For both survey modes, random samples of the German residential population were used.

We are mainly interested in the effects of interviewer presence on item nonresponse, inconsistency of responses and measurement errors such as response sets, but also in the extent to which interviewer presence affects the substantive results of the factorial survey experiment. Although it has been argued that factorial surveys might be an appropriate method for reducing social desirability bias, it is unclear whether this still holds when an interviewer is present. We therefore focus on sensitive dimensions in the vignette texts and compare their effects across survey modes.

Additionally, we collected data on interviewer characteristics, which enables us to examine whether, for instance, the interviewer’s sex influences the respondent’s evaluation of the corresponding vignette dimension.