Tuesday 18th July, 11:00–12:30, Room: N AUD5


It’s the Interviewers! New developments in interviewer effects research 2

Chair: Dr Salima Douhou (City University of London, CCSS)
Coordinator 1: Professor Gabriele Durrant (University of Southampton)
Coordinator 2: Dr Olga Maslovskaya (University of Southampton)
Coordinator 3: Dr Kathrin Thomas (City University of London, CCSS)
Coordinator 4: Mr Joel Williams (TNS BMRB)

Session Details

To what extent do interviewers affect data collection and how can we better monitor and limit their impact?

Any deviation from the standardised protocol of the data collection process has the potential to introduce bias into the data. Interviewer effects, defined as distortions of survey responses in surveys where an interviewer is present, may have a severe impact on data quality. These effects result from respondents' reactions to the social style and personality of interviewers, as well as to the way interviewers present the questions.

Analyses based on data biased by interviewer intervention, and the conclusions drawn from them, are likely to be incorrect. Hence, survey methodologists have improved the way in which interviewers are trained and briefed in order to limit their influence. Yet it remains an open question why interviewer effects occur even in surveys with exceptional efforts to train and monitor interviewers.

Interviewers make (initial) contact with prospective respondents and attempt to convince them to participate in the survey. The doorstep interaction between prospective respondents and interviewers is rarely documented, but an increasing number of studies indicate that some interviewers are more successful than others at convincing prospective respondents to participate in a survey and thus at avoiding nonresponse.

Once the doorstep interaction has been successful, interviewers may further affect the way in which respondents answer the survey questions. Variation in survey responses may be due to the attitudes, interpersonal skills and personality of interviewers, but may also relate to how interviewers present particular questions and how strictly they follow the instructions. Any deviation from the standardised protocol provided by the survey project's core research team decreases the comparability of the survey responses.

This session welcomes papers on new developments in the area of interviewer effects. Topics may include but are not restricted to:
• methodological developments in measuring and modelling interviewer effects,
• interviewer effects on measurement error,
• interviewer effects on nonresponse rates and nonresponse bias,
• interviewer influences on response latencies (timings),
• influence of personality traits, behaviour, attitudes, experience, and other characteristics of interviewers on survey estimates,
• implications for interviewer recruitment and training strategies,
• monitoring and evaluation of fieldwork efforts by interviewers,
• collection of GPS data or audio-visual material of doorstep interactions.

Papers that discuss these issues from a comparative perspective are also welcome. We invite academic and non-academic researchers and survey practitioners to contribute to our session.

Paper Details

1. Interpersonal Inferences and Interviewer Effects in Face-to-Face Surveys
Mr Simon Kühne (Socio-Economic Panel Study (SOEP))

It is well known that the presence of an interviewer can affect responses and thereby introduce variance and bias into survey estimates. For instance, some respondents adjust their true answers towards social norms or towards specific characteristics of the interviewer in order to appear in a favourable light. When investigating these types of interviewer effects, survey research has mainly focused on interviewer socio-demographics; only a few studies have examined the effects of characteristics that are not directly observable, such as interviewer personality, attitudes and beliefs. Moreover, survey research lacks insights into how interviewers' and respondents' interpersonal perceptions of each other affect respondents' answers to related questions.

For this project, self-reports of 1,184 respondents and 114 interviewers, as well as their perceptions of each other, were collected. Data collection took place in the context of the 2015 wave of the Socio-Economic Panel Study Innovation Sample (SOEP-IS), a large-scale longitudinal face-to-face household survey in Germany. Both respondents and interviewers were presented with the same questions covering a variety of political and social issues, such as political party identification and attitudes towards abortion and the legalisation of drugs.

This presentation includes results on (a) the effects of interviewers' own opinions on respondent answers, (b) the nature and accuracy of interpersonal inferences, and (c) their impact on respondents' self-reports. Initial results show that respondents and interviewers are able to infer each other's opinions and attitudes quite accurately. Moreover, the results reveal a strong association between a respondent's answer and the respondent's inference about his or her interviewer's opinion. This indicates that some respondents adjust their true answers towards anticipated interviewer opinions.


2. Assessing Interviewer Effects on Immigration Attitudes
Ms Sonila Dardha (City, University of London)

Interviewer error is one of the known sources of variability in face-to-face surveys. Kish (1962) reports that variance in respondents’ answers can be partially explained by interviewer clustering, with an intra-class correlation varying between 0.05 and 0.10. Interviewers behave differently in the interviewing process, e.g. when probing, and they create a particular atmosphere which influences respondents’ thought and answer processes (Mangione et al., 1992). Distortion in data collection due to interviewer error comprises two types of effects: bias, “when there is a dominant and systematic effect of interviewers”, and variance, “when these systematic effects differ between interviewers” (Loosveldt, 2008, p. 215).
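
To make these quantities concrete, the following is a minimal sketch, in generic notation rather than the authors' own, of the standard random-intercept decomposition behind the interviewer intra-class correlation and Kish's design effect:

    % Response of respondent i interviewed by interviewer j:
    \[ y_{ij} = \mu + u_j + e_{ij}, \qquad u_j \sim N(0,\sigma_u^2), \quad e_{ij} \sim N(0,\sigma_e^2) \]
    % Intra-class correlation: the share of response variance attributable to interviewers
    \[ \rho_{\mathrm{int}} = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_e^2} \]
    % Kish's design effect for an average interviewer workload of \bar{m} interviews:
    \[ \mathrm{deff} = 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}} \]

For example, with an intra-class correlation of 0.05 and an average workload of 21 interviews per interviewer, deff = 1 + 20 × 0.05 = 2, doubling the variance of survey estimates.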

The objective of this study is to understand the extent to which interviewer error affects immigration attitudes. The analysis uses European Social Survey (ESS) data and multilevel modelling: respondents (level 1) are clustered within interviewers (level 2). Two further levels could be added – regions (level 3) and countries (level 4) – to account for the variability that these geographical, cultural and social levels could explain. Since interviewer effects are higher for attitudinal than for factual questions (Schnell & Kreuter, 2005), their role can be assessed on the rotating module D (immigration, covering attitudes, perceptions and policy preferences) of ESS Round 7 data from 2014. Certain sensitive attitudinal questions, such as those asking whether immigrants generally take jobs away from workers in a country, may be prone to social desirability bias; the mere presence of an interviewer may cause a systematic error in respondents’ answers.

The main task is to isolate confounding indicators and potential sources of variability while zooming in on the interviewer effect. Respondents’ socio-demographic profile, locality/rurality, area/region and country are some of the factors that could explain attitudes and perceptions towards immigrants and immigration. The focus will be on disentangling these from the interviewer effect and showing the magnitude of such error on various immigration attitude outcomes. In addition, the role of interviewer indicators (level-2 variables), such as age and gender, will be assessed.
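
As an illustration only, here is a minimal Python/statsmodels sketch of the two-level specification described above; the regional and country levels are omitted, and all variable names are hypothetical placeholders rather than actual ESS field names:

    # Two-level random-intercept model: respondents nested within interviewers.
    # Column names (immig_attitude, resp_age, resp_female, urban, int_age,
    # int_female, interviewer_id) are hypothetical, not ESS variables.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ess_round7.csv")  # hypothetical input file

    model = smf.mixedlm(
        "immig_attitude ~ resp_age + resp_female + urban + int_age + int_female",
        data=df,
        groups=df["interviewer_id"],  # random intercept per interviewer
    )
    result = model.fit()
    print(result.summary())

    # Interviewer intra-class correlation from the estimated variance components:
    var_interviewer = result.cov_re.iloc[0, 0]
    icc = var_interviewer / (var_interviewer + result.scale)
    print(f"Interviewer ICC: {icc:.3f}")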

Understanding interviewer error in face-to-face surveys not only enriches the state of the art but also helps in designing strategies to reduce such error.


3. Why Do Interviewer Gender and Religious Dress Interactively Affect Support for Gender Equality in the Middle East?
Dr Lindsay Benstead (Portland State University)

How and why do observable interviewer traits, including interviewer gender and religious dress, affect survey responses and item non-response in the Middle East and North Africa? One potential cause of widely divergent survey findings on gender-sensitive issues in the Arab Barometer and other cross-national surveys may be measurement or non-response bias stemming from observable interviewer traits. Using three nationally representative surveys spanning the initial three-year post-Ben Ali period in Tunisia (1,202 Tunisians in 2012, 1,220 in 2014, and 3,600 in 2015), this paper assesses the link between interviewer gender and responses to questions about women’s rights in the public and private spheres. Interviewer gender affects the likelihood of favoring gender equality, but the nature and size of the effects depend on interviewer religious dress, as well as on the type of rights. Secular-appearing female interviewers receive the most progressive responses to all questions, while religious-appearing males receive the most traditional. However, when asking about private issues such as Shari’a law in the family code, secular-appearing males receive more progressive responses than religious-appearing female interviewers, while for public issues such as women in parliament, the opposite is true. The data offer strong support for social distance and ingroup loyalty theories across all respondent types, and for power relations theory for male respondents in conversations with female interviewers. Implications for reducing survey error and for understanding gender relations in the Middle East and North Africa are considered.
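
A hedged sketch of how such a conditional effect could be estimated, purely for illustration (the column names are placeholders, not the study's actual variables):

    # Logistic regression with an interviewer gender x religious-dress interaction;
    # the interaction term captures the conditional effects the abstract describes.
    # All column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("tunisia_pooled.csv")  # hypothetical pooled 2012/2014/2015 file

    logit = smf.logit(
        "favors_equality ~ int_female * int_religious_dress + resp_female + resp_age",
        data=df,
    ).fit()
    print(logit.summary())
    # A significant int_female:int_religious_dress coefficient would indicate that
    # the effect of interviewer gender depends on religious dress, as reported above.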


4. Toward a Better Understanding of the Effect of Interviewer’s Attitudes on Reporting Sensitive Religious Information
Professor Zeina Mneimneh (University of Michigan)
Ms Julie de Jong (University of Michigan)
Professor Mansoor Moaddel (University of Maryland)

Research has shown that interviewers can have important effects on respondents’ answers (Blom and Korbmacher, 2013; Davis, Couper, Janz, Caldwell, and Resnicow, 2010; Groves, 1989; Groves et al., 2009). Potential bias introduced by interviewer religious wardrobe on related survey items is of particular concern in countries facing religious and political upheavals, such as those in the Middle East and North Africa region. For example, studies in this region have found that interviewers wearing Islamic (rather than secular) symbols and Islamic hijab (vs. no hijab) elicited increased reporting of religious attitudes, either directly or through an interaction with respondent characteristics (Blaydes & Gillum, 2013; Benstead, 2014; Koker, 2009; Mneimneh et al., 2015). However, little is known about the effect of interviewers’ own attitudes. We have recently shown that an interviewer’s own religious attitudes affected respondents’ reported religious attitudes independent of interviewer religious wardrobe. The effect of an interviewer’s attitudes was as large as, and sometimes larger than, the effect of the interviewer’s religious wardrobe (Mneimneh et al., 2015). The literature, however, lacks explanations of the mechanisms behind these effects. Are interviewers mirroring the attitudes of the respondents they are interviewing, or are they projecting their own attitudes onto the respondents? Are the effects transmitted through side conversations about religious topics between the respondent and the interviewer?

Using recently available panel data from a second wave of data collection in Tunisia in 2015, this paper investigates these research questions by looking at interviewer attitudinal measures collected before the fieldwork and contrasting their effects with interviewer measures collected after the fieldwork. Moreover, observational measures of side conversations on religious and political topics were collected, allowing investigation of their potential mediating or moderating effects on the relationship between interviewers’ and respondents’ attitudes.
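
To make the mediation question concrete, the following is a minimal sketch using the Mediation class from statsmodels; every variable name is a hypothetical placeholder, not one of the study's instruments:

    # Does the effect of the interviewer's own religiosity on the respondent's
    # reported religiosity run through religious side conversations?
    # All column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.mediation import Mediation

    df = pd.read_csv("tunisia_wave2.csv")  # hypothetical input file

    # Outcome model: respondent's reported religiosity.
    outcome_model = sm.OLS.from_formula(
        "resp_religiosity ~ side_conversations + int_religiosity + resp_age", df)
    # Mediator model: frequency of religious side conversations in the interview.
    mediator_model = sm.OLS.from_formula(
        "side_conversations ~ int_religiosity + resp_age", df)

    med = Mediation(outcome_model, mediator_model,
                    exposure="int_religiosity", mediator="side_conversations")
    print(med.fit(n_rep=500).summary())  # direct, indirect and total effects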