



Wednesday 19th July, 09:00 - 10:30 Room: N AUD5


It’s the Interviewers! New developments in interviewer effects research 5

Chair: Dr Salima Douhou (City University of London, CCSS)
Coordinator 1: Professor Gabriele Durrant (University of Southampton)
Coordinator 2: Dr Olga Maslovskaya (University of Southampton)
Coordinator 3: Dr Kathrin Thomas (City University of London, CCSS)
Coordinator 4: Mr Joel Williams (TNS BMRB)

Session Details

To what extent do interviewers affect data collection and how can we better monitor and limit their impact?

Any deviation from the standardised protocol of the data collection process has the potential to introduce bias into the data. Interviewer effects, defined as distortions of survey responses in surveys where an interviewer is present, may have a severe impact on data quality. These effects result from respondents' reactions to the social style and personality of interviewers, as well as to the way interviewers present the questions.

Analyses based on data biased by interviewer intervention, and the conclusions drawn from them, are likely to be incorrect. Hence, survey methodologists have improved the way in which interviewers are trained and briefed in order to limit interviewers' influence. Yet it remains an open question why interviewer effects occur even in surveys that make exceptional efforts to train and monitor interviewers.

Interviewers make (initial) contact with prospective respondents and attempt to convince them to participate in the survey. The doorstep interaction between prospective respondents and interviewers is rarely documented, but an increasing number of studies indicate that some interviewers are more successful than others at convincing prospective respondents to participate and thereby avoiding nonresponse.

Once the doorstep interaction has been successful, interviewers may further affect the way in which respondents answer the survey questions. Variation in survey responses may be due to the attitudes, interpersonal skills and personality of interviewers, but may also relate to how interviewers present particular questions and how strictly they follow the instructions. Any deviation from the standardised protocol provided by the core research team of the survey project decreases the comparability of the survey responses.

This session welcomes papers on new developments in the area of interviewer effects. Topics may include but are not restricted to:
• methodological developments in measuring and modelling interviewer effects,
• interviewer effects on measurement error,
• interviewer effects on nonresponse rates and nonresponse bias,
• interviewer influences on response latencies (timings),
• influence of personality traits, behaviour, attitudes, experience, and other characteristics of interviewers on survey estimates,
• implications for interviewer recruitment and training strategies,
• monitoring and evaluation of fieldwork efforts by interviewers,
• collection of GPS data or audio-visual material of door-step interactions.

Papers that discuss these issues from a comparative perspective are also welcome. We invite academic and non-academic researchers and survey practitioners to contribute to our session.

Paper Details

1. Interviewer effects on response latencies in a face-to-face interview survey
Dr Olga Maslovskaya (University of Southampton)
Professor Gabriele Durrant (University of Southampton)
Professor Patrick Sturgis (University of Southampton)

Analysis of paradata can greatly help to improve survey processes. One form of paradata is the length of time it takes a respondent to answer a survey or particular survey questions, referred to in the literature as response latencies or response times. Interview length is often used as a key indicator of both response quality and fieldwork costs, so a better understanding of interview length is important. One key influence on interview length in interviewer-administered surveys is the interviewer. It is well known that interviewers play a key role in determining the quality and cost of data collection in household interview surveys. They influence the likelihood that sample members will respond and, therefore, the bias and precision of estimates. They also affect the answers that respondents give in ways that can make them more, or less, accurate. These effects have been robustly demonstrated in the existing literature. In this paper we consider a different and less well-studied outcome: the extent to which interviewers affect the time respondents take to answer individual questions. We apply a cross-classified mixed-effects location scale model to response times in the UK Household Longitudinal Study (Understanding Society Wave 3) to explore interviewer effects while controlling for characteristics at the respondent, area and question levels. The model extends the standard two-way cross-classified random-intercept model by specifying the residual variance to be a function of covariates and of additional interviewer, area and question random effects. This extension enables us to study interviewer effects not just on the ‘location’ (mean) of response latencies, but also on their ‘scale’ (variability). Furthermore, by linking the response latency paradata to data on interviewers from an independent survey of the interviewers who collected data in Wave 3 of Understanding Society, we are also able to model these interviewer effects as a function of interviewers’ demographic, work experience, attitudinal, and personality characteristics.
The paper is proposed as part of a UK-based research project, the NCRM Research Workpackage 1 on ‘Data Collection for Data Quality’, which is funded by the UK Economic and Social Research Council (ESRC) and led by a team from the University of Southampton. The project investigates, among other topics, interviewer effects on response timings (response latencies). (Project website: http://datacollection.ncrm.ac.uk/ .)
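
A minimal sketch of the kind of location scale model described above, with notation assumed here rather than taken from the paper, for the (log) response time y of respondent r on question q, interviewed by interviewer i in area a:

\[
y_{rq} = \beta_0 + x_{rq}^{\top}\beta + u_i + v_a + w_q + e_{rq},
\qquad u_i \sim N(0,\sigma_u^2),\; v_a \sim N(0,\sigma_v^2),\; w_q \sim N(0,\sigma_w^2),
\]
\[
e_{rq} \sim N(0,\sigma_{e,rq}^2),
\qquad \log \sigma_{e,rq}^2 = \alpha_0 + z_{rq}^{\top}\alpha + \tilde{u}_i + \tilde{v}_a + \tilde{w}_q .
\]

The first equation is the cross-classified random-intercept (‘location’) part; the second lets the residual variance (‘scale’) depend on covariates and on additional interviewer, area and question random effects, so that interviewers can differ both in average response latency and in its variability.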


2. Time is Money, or Is It? Using Module Lengths to Evaluate Interviewer Effects
Dr Kathrin Thomas (City, University of London)
Dr Salima Douhou (City, University of London)
Miss Virginia Ros (City, University of London)

The Total Survey Error (TSE) framework suggests that sampling error, but also non-sampling error, e.g. interviewer effects, are relevant components for evaluating the quality of survey data. Among other things, previous research has shown that interview timings, and especially the timings of particular batteries of questions or topical modules, display variation. Different mechanisms may trigger varying module timings. These could be related to individual respondents’ characteristics, to contextual differences (e.g., translations in cross-national research), but also to the survey interviewers, for instance when they deviate from the standardized protocol because respondents ask for clarification or are reluctant to answer a question. In this study we explore variation in module lengths, looking at batteries of questions on different sensitive issues and the extent to which this influences data quality in the European Social Survey (ESS). We identify batteries/modules of sensitive topics using post-hoc coding of all questions on the core ESS questionnaire, following Krumpal’s (2013) definition, which distinguishes intrusiveness, risk of disclosure and norm violation. We then evaluate module length across interviewers to detect significant within-interviewer variation, controlling for different interviewer characteristics. The results of this study will help us to better understand whether module length can be used as an indicator to study bias induced by interviewers. Future (qualitative) research will have to further disentangle the reasons for different module lengths: do some interviewers indeed probe more frequently or try to ‘help out’, while others feel time is money and want to complete the interview as quickly as possible? Our study may also contribute to developing more specific instructions and training material for ESS survey interviewers in order to reduce interviewer-related error.
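
As a sketch of one common way to separate interviewer-level from residual variation in module length (an assumed specification, not necessarily the authors’), consider a two-level model for the (log) length t of a sensitive module for respondent r interviewed by interviewer i:

\[
\log t_{ri} = \beta_0 + x_{ri}^{\top}\beta + w_i^{\top}\gamma + u_i + e_{ri},
\qquad u_i \sim N(0,\sigma_u^2),\; e_{ri} \sim N(0,\sigma_e^2),
\]

where x are respondent-level controls and w interviewer characteristics; the share of module-length variance attributable to interviewers is \(\rho = \sigma_u^2 / (\sigma_u^2 + \sigma_e^2)\), and the residual variance \(\sigma_e^2\) captures variation within interviewers.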


3. The Impact of Interviewer Effects on Regression Coefficients
Mr Micha Fischer (University of Michigan)
Professor Brady T. West (University of Michigan)
Professor Michael R. Elliott (University of Michigan)
Professor Frauke Kreuter (University of Maryland)


This presentation will examine the influence of interviewers on the estimation of regression coefficients from survey data. First, we will present theoretical considerations with a focus on measurement errors and nonresponse errors due to interviewers. Next, we will present the results of a simulation study identifying which of several nonresponse and measurement error scenarios has the biggest impact on the estimate of a slope parameter from a simple linear regression model. We find that when response propensity depends on the dependent variable in a linear regression model, bias in the estimated slope parameter is introduced, but we also find no evidence that interviewer effects on the response propensity have a large impact on the estimated regression parameters independent of the missing data mechanism. The simulation study also suggests that standard measurement error adjustments using the reliability ratio (the ratio of the measurement-error-free variance to the observed variance with measurement error) can correct most of the bias introduced by interviewer effects in a variety of complex settings, indicating that more routine adjustment for such effects should be considered in regression analysis using survey data. Finally, we examine the primary sources of interviewer effects on regression coefficients estimated using real survey data collected in Germany, in a study where the nonresponse errors and measurement errors introduced by interviewers for individual variables could be computed using administrative data. We find that our proposed adjustment works well to correct most of the bias introduced in this setting.
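
Assuming the classical errors-in-variables setting for a simple linear regression of y on an error-prone predictor \(x = x^* + \epsilon\) (a sketch, not necessarily the exact adjustment used in the paper), the reliability-ratio correction mentioned above takes the form

\[
\lambda = \frac{\operatorname{Var}(x^*)}{\operatorname{Var}(x^*) + \operatorname{Var}(\epsilon)} = \frac{\operatorname{Var}(x^*)}{\operatorname{Var}(x)},
\qquad E[\hat{\beta}_{\text{naive}}] \approx \lambda\,\beta,
\qquad \hat{\beta}_{\text{adj}} = \hat{\beta}_{\text{naive}} / \hat{\lambda},
\]

so the naive slope is attenuated towards zero by the factor \(\lambda\) and is corrected by dividing by an estimate of the reliability ratio.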


4. Explaining interviewer effects: an alternative approach
Professor Geert Loosveldt (KU Leuven)
Dr Koen Beullens (KU Leuven)
Dr Caroline Vandenplas (KU Leuven)

To assess the impact of interviewer effects on substantive variables in face-to-face surveys, it is common practice to calculate intraclass correlation coefficients (ICCs). Results of such analyses with data from the European Social Survey show interviewer effects both on point estimates of variables and on relationships between variables. As a consequence, substantive results for some countries may be affected by these interviewer effects. More insight into the factors that contribute to these effects is advisable and necessary. One strategy to explain interviewer effects is to use interviewer characteristics (for example, interviewer experience or interviewer workload) to model the between-interviewer variance. In contrast, the evaluation of the impact of respondent characteristics (and characteristics of the interview situation) is less obvious. Usually, respondent characteristics are specified in the models to control for differences between interviewers in the composition of their respondent groups. Interviewer effects are then evaluated after respondent characteristics have explained part of the variance in the dependent variable. This means that respondent characteristics are used to explain the variance in the substantive dependent variable and that interviewer effects express the variability between interviewers after controlling for these respondent characteristics. Such models do not assess the effect of respondent characteristics on interviewer effects; in fact, the relationship between respondent characteristics and interviewer effects is not specified in the model. However, it is reasonable to assume that some respondents are more sensitive to interviewer effects and that in some respondent groups the ICCs are higher. To find out whether respondent characteristics may influence the extent of interviewer effects, we can specify a model with ICCs as the dependent variable and respondent characteristics as independent variables. This implies a change in the unit of analysis: from measurements of the substantive variables to measurements of ICCs; the former are at the respondent level, the latter at the level of the intraclass correlations. This change allows us to investigate the relationship between interviewer effects and respondent characteristics, as well as characteristics of the interview situation. The specification of ICCs as the dependent variable in the analysis is the key element of this alternative approach to explaining interviewer effects. In the paper, we will elaborate and illustrate this approach. We will examine whether interviewer effects in the European Social Survey are related to two respondent characteristics, age and educational level, and one interview characteristic, speed of interviewing.
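
Schematically, and with notation assumed here rather than taken from the abstract, the conventional ICC for a substantive variable y measured on respondent r by interviewer i is

\[
\rho = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_e^2}
\quad\text{from}\quad
y_{ri} = \mu + u_i + e_{ri},
\qquad u_i \sim N(0,\sigma_u^2),\; e_{ri} \sim N(0,\sigma_e^2),
\]

and the alternative approach would estimate \(\hat{\rho}_g\) separately within respondent groups g (for example, cells defined by age and education, or by interview speed) and then model

\[
\hat{\rho}_g = \gamma_0 + \gamma_1\,\text{age}_g + \gamma_2\,\text{education}_g + \gamma_3\,\text{speed}_g + \varepsilon_g,
\]

so that the ICCs themselves become the dependent variable.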


5. Interviewer-respondent interactions in CAPI and CATI: Rapport through laughter?
Dr Yfke Ongena (University of Groningen)

Various studies have shown that social desirability bias and satisficing are more prevalent in CATI than in CAPI surveys. Although this difference has been explained theoretically in terms of rapport (Holbrook et al. 2003), it has not been studied systematically whether interviewer-respondent interactions in CATI and CAPI surveys indeed show a difference in rapport. Rapport is a concept that is difficult to define and may be related to various types of behaviors in interviewer-respondent interaction. One specific behavior that may be related to rapport is laughter. We analyzed 60 CATI and 54 CAPI interviews that originated from a mixed-mode experiment using the European Social Survey questionnaire (Haan 2015). The analysis was based on a coding scheme developed by Garbarski, Dykema and Schaeffer (2016), who define rapport in terms of responsiveness by interviewers and engagement by respondents. We found mixed differences with respect to behaviors related to rapport. For example, interviewer laughter appeared to be more common in CATI than in CAPI, but apologetic utterances such as ‘sorry’ occurred equally often in both modes. Furthermore, a significant difference was found in the number of words uttered: question-answer sequences contained on average two more words in CATI than in CAPI. This effect is partly explained by the fact that show cards were used for many questions in the CAPI survey; in those cases the extended interaction in CATI interviews is due to less efficient communication about the response alternatives. Further analysis of extended interactions showed that respondents in CATI had more difficulty formulating their responses and more difficulty with question wording than in CAPI. These task-related issues may contribute to decreased trust and motivation of respondents in CATI interviews, and may in turn explain the increased level of satisficing and social desirability bias in this survey mode compared to CAPI.