
ESRA 2019 full program




Detecting, Explaining and Managing Interviewer Effects in Surveys 1

Session Organisers Dr Daniela Ackermann-Piek (GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany)
Mr Brad Edwards (Westat)
Dr Jette Schröder (GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany)
Time: Tuesday 16th July, 16:00 - 17:00
Room: D16

How much influence do interviewers have on different aspects of the survey process, and how can we better reduce their negative impact on data quality while enhancing their positive impact?

Although interviewer effects have been studied over several generations, they remain of high interest in interviewer-administered surveys. Interviewers are involved in nearly all aspects of the data collection process, including the production of sampling frames, establishing contact and gaining cooperation with sample units, administration of the survey instrument, and the editing and transmission of data. Thus, interviewers can both cause and prevent errors in nearly every aspect of a survey.

However, detecting interviewer effects is only a first step: it is equally important to understand why they occur. Although various studies have sought to explain interviewer effects using multiple sources of data (e.g., paradata, interviewer characteristics, response times), the results are inconclusive. In addition, it is essential to prevent negative interviewer effects before they occur, so that interviewer-administered surveys can produce high-quality data. There are multiple ways to intervene: interviewer training, monitoring during fieldwork, adaptive fieldwork designs, switching the survey mode, etc. Yet relatively little is known about how effectively these different methods reduce interviewer error, because experimental studies are lacking.

We invite researchers to submit papers dealing with aspects of detecting, explaining and preventing interviewer effects in surveys. We are especially interested in quasi-experimental studies on the detection, explanation, and prevention of interviewer error in surveys, and on developing or strengthening interviewers' ability to repair or avert errors. We welcome researchers and practitioners from all disciplines across the academic, governmental, private and voluntary sectors to contribute to our session.

Keywords: Interviewer effects, Interviewer training, Interviewer characteristics, Paradata, Total Survey Error

Profiles of Interviewers’ Strategies in Face-to-Face Surveys

Mr Alexandre Pollien (FORS, Swiss Centre for Expertise in Social Sciences, University of Lausanne) - Presenting Author
Dr Jean-Marie Le Goff (Life Course and Inequalities Research Center, University of Lausanne & NCCR LIVEs, Lausanne.)

This paper focuses on the profiles of face-to-face interviewers in the work of obtaining cooperation from target persons. It examines their activity in two international surveys conducted every two years in Switzerland since 2010: the ESS and Mosaich (ISSP). For both, the standard procedure requires that after five unsuccessful contact attempts or a refusal, a conversion phase takes place with another interviewer. We looked at these first contacts before the conversion phase. For each interviewer-survey, we computed vectors of probabilities of obtaining, at each of the five attempts, a cooperation, a refusal, a non-contact or another outcome. A cluster analysis of these vectors disclosed three types of profiles, revealed through different arrangements of likely non-contacts, refusals or responses during the contact process. Our hypothesis is that interviewers' strategies hold a temporal dimension. A first cluster comprises slower interviewers who scrupulously comply with instructions; they also obtain more non-contacts. A second cluster includes interviewers with a shorter and more chaotic sequence of contact attempts; these interviewers face more refusals at the first attempt. A third cluster includes interviewers who succeed faster in getting in touch with respondents; they are more seasoned and obtain the highest response rates. An interviewer study shows that this third cluster is characterized by a more realistic attitude combined with a relational orientation, leading interviewers to tailor their approach. The second cluster combines realism with a procedural attitude, leading to offhand behaviour. The first cluster combines a procedural attitude with a relational orientation, leading to a fastidious contact strategy.
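The profile construction described in the abstract can be sketched briefly: build, for each interviewer, a vector of outcome probabilities over the first five contact attempts and cluster those vectors. The snippet below is a minimal illustration under assumed column names (interviewer_id, attempt, outcome) and simulated data, with k-means standing in for whatever clustering procedure the authors actually used; it is not their code.

# A minimal sketch (assumptions, not the authors' code): one row per
# interviewer holding the probabilities of cooperation, refusal, non-contact
# and other outcomes at each of the first five attempts, then a clustering
# of those rows. Column names, the toy data and k-means are illustrative.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

OUTCOMES = ["cooperation", "refusal", "noncontact", "other"]

def interviewer_profiles(contacts: pd.DataFrame) -> pd.DataFrame:
    """Turn a contact log (interviewer_id, attempt 1-5, outcome) into one
    row per interviewer with 5 x 4 outcome probabilities."""
    return (contacts
            .groupby(["interviewer_id", "attempt"])["outcome"]
            .value_counts(normalize=True)              # P(outcome | attempt)
            .unstack(fill_value=0.0)                   # outcomes to columns
            .reindex(columns=OUTCOMES, fill_value=0.0)
            .unstack(fill_value=0.0))                  # attempts to columns

# Toy contact log: 30 interviewers, random attempts and outcomes.
rng = np.random.default_rng(0)
log = pd.DataFrame({
    "interviewer_id": rng.integers(1, 31, size=1500),
    "attempt": rng.integers(1, 6, size=1500),
    "outcome": rng.choice(OUTCOMES, size=1500),
})

profiles = interviewer_profiles(log)
# Three clusters, mirroring the three profiles reported in the abstract.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print(pd.Series(labels, index=profiles.index).value_counts())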


Modelling Group-Specific Interviewer Effects on Nonresponse using Separate Coding for Random Slopes in Multilevel Models

Ms Jessica Herzing (University of Lausanne) - Presenting Author
Mrs Annelies Blom (University of Mannheim)
Mr Bart Meuleman (University of Leuven)

While there is ample evidence that interviewers affect nonresponse, and some evidence on the factors explaining overall interviewer effects, the literature is sparse on how interviewers differentially affect specific groups of respondents, despite the importance of this for nonresponse bias. One reason for this sparseness may be the limitations of the standard use of multilevel models. We demonstrate how an alternative parametrization of the random components in multilevel models, so-called separate coding, delivers insights into differential interviewer effects on specific respondent groups. A multilevel model with separate coding of the random coefficients not only makes it possible to estimate how the size of interviewer effects varies across different types of respondents, but also offers possibilities to investigate how interviewer characteristics affect the groups differentially.
Using the example of nonresponse during the recruitment of a probability-based online panel, analysed separately for persons with and without prior internet access (data from the German Internet Panel), we find that the size of the interviewer effect differs between the two respondent groups. While we discover no interviewer effects on nonresponse for persons without internet access (offliners), we find sizable interviewer effects for persons with internet access (onliners). In addition, we identify interviewer characteristics that explain this group-specific nonresponse. Our results suggest that interviewer-related fieldwork strategies might help to increase response rates among onliners, as the interviewer effect for onliners was relatively large compared to that for offliners. Surveys with large imbalances among respondent groups clearly gain from investigating the variation of interviewer effects: considering group-specific interviewer effect sizes makes the implementation or adjustment of interviewer-related response-enhancement strategies more targeted and may thus mitigate nonresponse bias more effectively.
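The separate-coding idea can be illustrated with a small simulation. The sketch below is an assumption-laden stand-in, not the authors' specification: it uses a linear probability model via statsmodels' MixedLM (the paper presumably fits a logistic multilevel model), simulated data, and invented variable names (response, onliner, offliner, interviewer_id). The key point is the re_formula, which drops the shared random intercept so that each respondent group gets its own interviewer-level variance component.

# A minimal sketch of separate coding for random interviewer effects, using
# simulated data and a linear probability model as a stand-in for the (likely
# logistic) multilevel model in the paper. All names and values are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_interviewers, n_per = 100, 40
iv = np.repeat(np.arange(n_interviewers), n_per)   # interviewer ids
onliner = rng.integers(0, 2, size=iv.size)         # 1 = prior internet access

# Simulate sizable interviewer variation for onliners, little for offliners.
effect_on = rng.normal(0.0, 0.15, size=n_interviewers)
effect_off = rng.normal(0.0, 0.03, size=n_interviewers)
linpred = np.clip(0.55 + 0.10 * onliner
                  + effect_on[iv] * onliner
                  + effect_off[iv] * (1 - onliner), 0.01, 0.99)
df = pd.DataFrame({
    "response": (rng.random(iv.size) < linpred).astype(int),
    "onliner": onliner,
    "offliner": 1 - onliner,
    "interviewer_id": iv,
})

# Separate coding: "0 +" removes the shared random intercept, so the model
# estimates one interviewer variance per respondent group instead of an
# intercept-plus-slope parametrization.
model = smf.mixedlm("response ~ onliner", df,
                    groups=df["interviewer_id"],
                    re_formula="0 + offliner + onliner")
result = model.fit()
print(result.cov_re)   # interviewer variance for offliners vs. onliners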