Detecting, Explaining and Managing Interviewer Effects in Surveys 3

Session Organisers: Dr Daniela Ackermann-Piek (GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany)
Mr Brad Edwards (Westat)
Dr Jette Schröder (GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany)
Time: Thursday 18th July, 14:00 - 15:30
Room: D16

How much influence do interviewers have on the different aspects of the survey process, and how can we reduce their negative impact on data quality while enhancing their positive impact?

Although interviewer effects have been studied for several generations, they remain of high interest in interviewer-administered surveys. Interviewers are involved in nearly all aspects of the data collection process, including the production of sampling frames, gaining contact with and cooperation from sample units, administration of the survey instrument, and the editing and transmission of data. Thus, interviewers can both cause and prevent errors in nearly every aspect of a survey.

However, detecting interviewer effects is only a first step; it is equally important to understand why they occur. Although various studies have sought to explain interviewer effects using multiple sources of data (e.g., paradata, interviewer characteristics, response times), the results are inconclusive. In addition, it is essential to prevent negative interviewer effects before they occur, to ensure that interviewer-administered surveys can produce high-quality data. There are multiple ways to intervene: interviewer training, monitoring during fieldwork, adaptive fieldwork designs, switching the survey mode, etc. Yet relatively little is known about how effectively these different methods reduce interviewer error, because experimental studies are lacking.

We invite researchers to submit papers dealing with any aspect of detecting, explaining, and preventing interviewer effects in surveys. We are especially interested in quasi-experimental studies on the detection, explanation, and prevention of interviewer error in surveys, and in work on developing or encouraging interviewers' ability to repair or avert errors. We welcome researchers and practitioners from all disciplines across the academic, governmental, private, and voluntary sectors to contribute to our session.

Keywords: Interviewer effects, Interviewer training, Interviewer characteristics, Paradata, Total Survey Error

Interviewer Effects on Well-Being and Health Questions

Mr Dimitri Prandner (Johannes Kepler University of Linz, Austria) - Presenting Author
Professor Johann Bacher (Johannes Kepler University of Linz, Austria)

Being healthy and feeling well are socially desirable states and have thus been part of methodological discussions since the 1980s (e.g., Nederhof 1985; Davis et al. 2010; Krumpal 2013). Hence, responses to survey questions concerning one’s health and well-being can be expected to be vulnerable to bias. Furthermore, Davis et al. (2010) provided an insightful overview and reported on the interviewer effects that accompany such questions when they are asked in a survey. Following up on this last aspect, our arguments are based on the idea that face-to-face interviews are social situations in which interactions are adjusted to the specific social setting (e.g., the age and gender combination of interviewer and interviewee); we therefore test for the effects interviewers have on responses to questions on health and well-being.

Using multilevel data from the fourth wave of the Social Survey Austria, which deployed 80 interviewers to survey 2,021 individuals across Austria via face-to-face interviews in the summer of 2016, we discuss the following questions:

• Does the sex of the interviewer influence response behavior?
• Does the age of the interviewer influence response behavior?
• Are experienced interviewers able to weaken possible effects?

We expect that male respondents give more socially desirable answers if they are interviewed by a younger female interviewer (Hypothesis 1) and that experienced interviewers can moderate this effect (Hypothesis 2), as sketched in the model below.
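
To make the design concrete, here is a minimal sketch of the kind of two-level model such an analysis implies, assuming a respondent-level data set with hypothetical file and column names (an illustration, not the authors' actual code):

    # Sketch of a random-intercept model for interviewer effects on a
    # well-being item; all file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per respondent, with the interviewer's ID and
    # characteristics merged in.
    df = pd.read_csv("ssa_wave4.csv")

    # "wellbeing" is the outcome, "r_male" the respondent's sex;
    # "iv_female", "iv_age", and "iv_experience" describe the interviewer.
    model = smf.mixedlm(
        "wellbeing ~ r_male * (iv_female + iv_age) + iv_experience",
        data=df,
        groups=df["iv_id"],  # random intercept per interviewer
    )
    result = model.fit()
    print(result.summary())

The interaction terms r_male:iv_female and r_male:iv_age correspond to Hypothesis 1; Hypothesis 2 would add a further interaction with iv_experience.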

References:
Davis, R. E., Couper, M. P., Janz, N. K., Caldwell, C. H., & Resnicow, K. (2010). Interviewer effects in public health surveys. Health Education Research, 25(1), 14–26.
Krumpal, I. (2013). Determinants of social desirability bias in sensitive surveys: A literature review. Quality & Quantity, 47(4), 2025–2047.
Nederhof, A. J. (1985). Methods of coping with social desirability bias: A review. European Journal of Social Psychology, 15(3), 263–280.


Interviewer Effects on Responses to Sensitive Questions: Evidence from Demographic and Health Surveys in Four African Countries

Dr Sarah Staveteig (The Demographic and Health Surveys Program, Avenir Health) - Presenting Author

Sensitive survey questions tend to be disproportionately subject to nonresponse and measurement error. Nationally representative Demographic and Health Surveys, which have been conducted in 93 countries, typically ask women several hundred questions during face-to-face interviews, including a handful about sensitive topics such as sexual behavior. To date, there has been little opportunity to systematically study how the characteristics of interviewers, and their interaction with respondent characteristics, affect nonresponse and the inconsistency of responses to these sensitive questions. Drawing on newly gathered data about interviewer characteristics from recent Demographic and Health Surveys in four sub-Saharan African countries—Burundi, Malawi, Uganda, and Zimbabwe—I assess the effect of individual interviewers, as well as of interviewer characteristics and the social distance between interviewers and respondents, on inconsistency and differential reporting of these behaviors and experiences.

Outright refusals to answer sensitive questions were too rare for systematic patterns to be detected; instead, the data show that a handful of interviewers in each country produced unusually high levels of inconsistency. In examining other types of sensitive questions, I found evidence of interviewer effects in adjusted models, including evidence of age effects. For example, net of individual and other interviewer characteristics, respondents interviewed by interviewers at least ten years older than them had a significantly lower tendency to report premarital sex; marital status effects differed by country. These results may be suggestive of the role of culture and social norms. A random effects model of individual interviewers and respondent characteristics found clear evidence of other interviewer effects on response bias. I discuss how these findings could be used to improve data quality in future household surveys.
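
As an illustration of the social-distance measure described above, the following sketch derives the "interviewer at least ten years older" indicator and enters it into a simple random-intercept model. All file and column names are hypothetical, and for a binary outcome a multilevel logistic model would be the standard choice, so the linear fit here is only an approximation:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per respondent, with the interviewer's
    # age and ID merged in.
    df = pd.read_csv("dhs_pooled.csv")

    # Social-distance indicator: interviewer at least ten years older
    # than the respondent.
    df["iv_ten_plus_older"] = (df["iv_age"] - df["resp_age"] >= 10).astype(int)

    # Linear approximation of the adjusted model described above.
    fit = smf.mixedlm(
        "reported_premarital_sex ~ iv_ten_plus_older + resp_age + married",
        data=df,
        groups=df["iv_id"],  # random intercept per interviewer
    ).fit()
    print(fit.summary())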


Did You Like the Interview? Interviewer Effects on Respondents’ Subjective Assessment of the Interview

Dr Jette Schröder (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Claudia Schmiedeberg (LMU Munich)

A large body of research documents interviewer effects on survey data. Interviewers affect unit nonresponse and measurement; for instance, more experienced interviewers apply more efficient strategies for gaining cooperation or conduct their interviews faster. But the underlying mechanisms of interviewer effects are still unclear, and the literature on explaining interviewer effects does not yet cover all types of survey outcomes.
We investigate interviewer effects on respondents’ subjective assessment of the interview, measured by the survey question “How did you like the interview?” Thus, we do not measure an element of survey error such as unit nonresponse or measurement error but focus on the interview process as an underlying aspect. How respondents feel during the interview is an important, though understudied, question. For instance, if respondents enjoy the interview, they may be more motivated to answer all questions and may be less likely to refuse participation in subsequent waves.
We use data from the German Family Panel pairfam and apply multilevel models to account for the nested data structure. In addition, we draw on data from an interviewer survey conducted between pairfam waves 8 and 9 to explain any interviewer effects found. We focus not only on interviewers’ demographic characteristics but also on interviewer attitudes and behaviors, such as whether the interviewer enjoys meeting panel respondents each year or is able to establish a friendly rapport with panel respondents over time. Our results indicate that considerable interviewer effects on respondents’ subjective assessment of the interview exist even after controlling for respondent and survey characteristics. Interviewer characteristics, attitudes, and behaviors can explain part of the interviewer effect.
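
The last step can be quantified by comparing the interviewer-level variance component before and after adding interviewer characteristics to the model. A minimal sketch, with hypothetical file and column names (not the authors' actual code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per respondent, with the interview
    # rating ("How did you like the interview?") and interviewer traits.
    df = pd.read_csv("pairfam_ratings.csv")

    null = smf.mixedlm("rating ~ 1", data=df, groups=df["iv_id"]).fit()
    full = smf.mixedlm(
        "rating ~ resp_age + resp_sex + iv_enjoys_meetings + iv_rapport",
        data=df,
        groups=df["iv_id"],
    ).fit()

    v_null = null.cov_re.iloc[0, 0]  # interviewer variance, empty model
    v_full = full.cov_re.iloc[0, 0]  # interviewer variance, full model
    print("share of interviewer variance explained:", 1 - v_full / v_null)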


Interviewer-Related Variation in Open-Ended Questions: Matter of Style, or Indicator of Data Quality?

Dr Alice Barth (Bonn University) - Presenting Author
Dr Andreas Schmitz (Bonn University)

In large-scale surveys, it is of paramount interest for researchers to assess the amount of interviewer-induced variation. Data quality differences at the interviewer level raise the suspicion that interviews were poorly conducted or even (partly) faked; they may compromise substantive analyses and affect conclusions. Whereas a variety of sophisticated screening techniques for standardized survey data have been proposed, responses to open-ended questions are seldom part of data quality assessments in this context. This is a suboptimal practice for two reasons: (a) unstructured responses contain relevant substantive insights, as they can partially compensate for the restricted format of closed questions, and (b) conversely, unstructured responses can themselves contain evidence of fraudulent interviewer practices, making them a widely neglected source for systematic quality control.
In this presentation, interviewer-related variation in the length, variability, and content of responses to open-ended questions in the German General Social Survey (ALLBUS) is assessed and related to quality indicators for standardized questions. In 2008 and 2016, ALLBUS respondents were asked in an open format during the face-to-face interview about their associations with the terms “left” and “right” and with the term “foreigners”, respectively. These questions are subject to considerable interviewer effects: for example, regarding the length of responses, level-two variation ranges between 0.20 and 0.43 (intra-interviewer correlations).
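
The intra-interviewer correlation cited here is the share of total variance that lies between interviewers. A minimal sketch of how it can be computed from a random-intercept model, with hypothetical file and column names:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per respondent, with the length of the
    # open-ended response and the interviewer's ID.
    df = pd.read_csv("allbus_open_ended.csv")

    fit = smf.mixedlm("response_length ~ 1", data=df,
                      groups=df["iv_id"]).fit()
    var_between = fit.cov_re.iloc[0, 0]  # variance between interviewers
    var_within = fit.scale               # residual (within) variance
    icc = var_between / (var_between + var_within)
    print("intra-interviewer correlation:", icc)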
Do these effects merely reflect interviewers’ differential access to different subpopulations of respondents, or differing reporting styles of interviewers, or do they actually correspond to differences in data quality in other parts of the survey? In order to answer these questions, the distribution of multiple indicators of data quality in standardized and unstandardized questions (missing values, response variance in item batteries, etc.) is examined at the interviewer level. Considering interviewer-related measurement variation in open- and closed-ended questions relationally allows for a comprehensive assessment of interviewer conduct.


Interviewer Variation in Third Party Presence during Face-to-Face Interviews

Professor Zeina Mneimneh (University of Michigan)
Ms Julie de Jong (University of Michigan) - Presenting Author
Ms Jennifer Kelley (University of Essex)

The presence of a third person in face-to-face interviews constitutes an important contextual factor that affects the interviewee's responses to culturally sensitive questions. Interviewers play an essential role in requesting, achieving, and reporting on the private setting of the interview. Our recent work has shown that the rate of interview privacy varies significantly across interviewers; while some interviewers report high rates of privacy among their interviews, others report low rates of privacy for the interviews they administered. Yet, there is a lack of understanding of what explains such interviewer variation in interview privacy. Do certain interviewer characteristics such as experience, sociodemographics, and attitudes towards privacy explain such variations? What about the measurement quality of the privacy observation measures interviewers collect? Is it possible that section-specific measures show less interviewer variation than end-of-the-interview measures because of potential differential recall across interviewers?

This paper explores these research questions for the first time using data from a national mental health survey conducted in the Kingdom of Saudi Arabia, where a total of 4000 face-to-face interviews were completed. Interviewers were required to record their observations regarding the presence of a third person at the end of several questionnaire sections throughout the interview, in addition to recording this information about the overall presence of a third person at the conclusion of the interview. We use these two types of observations and measure the contribution of interviewer variation to these estimates. We then compare predictors of interview privacy for each of the two types of observations using a series of multilevel models focusing on the effect of interviewer-level characteristics (while controlling for respondent and household level characteristics). Findings from this paper will have important practical implications related to training interviewers on requesting, maintaining, and reporting information on the private setting of the interview.
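
To illustrate how the two types of privacy observations can be compared, here is a hedged sketch with hypothetical file and column names; for binary privacy indicators, the multilevel (logistic) models described above would be the proper choice, so this linear version is only an approximation:

    import pandas as pd
    import statsmodels.formula.api as smf

    def interviewer_icc(df, outcome):
        # Share of variance in `outcome` that lies between interviewers.
        fit = smf.mixedlm(f"{outcome} ~ 1", data=df,
                          groups=df["iv_id"]).fit()
        v_between = fit.cov_re.iloc[0, 0]
        return v_between / (v_between + fit.scale)

    # Hypothetical inputs: section-level records (one row per section per
    # interview) and interview-level records (one row per interview).
    sections = pd.read_csv("privacy_by_section.csv")
    interviews = pd.read_csv("privacy_end_of_interview.csv")

    print("section-level measure:", interviewer_icc(sections, "third_party_present"))
    print("end-of-interview measure:", interviewer_icc(interviews, "third_party_present"))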