Friday 19th July 2013, 11:00 - 12:30, Room: No. 14

Explaining Interviewer Effects in Interviewer-Mediated Surveys 2

Convenor Professor Annelies Blom (University of Mannheim)
Coordinator Ms Julie Korbmacher (Munich Center for the Economics of Aging (MEA), Max Planck Institute for Social Law and Social Policy)

Session Details

Researchers are invited to submit proposals for papers for the session "Explaining Interviewer Effects in Interviewer-Mediated Surveys" at the European Survey Research Association conference, July 15-19, 2013, in Ljubljana. In interviewer-mediated surveys, interviewers naturally have great potential to affect data quality. Interviewers compile sampling frames; they make contact with and gain the cooperation of sample units; and they act as mediators between the researcher's questions and the respondent's answers. Their characteristics, attitudes, experience and abilities can affect all stages of the data collection process, and interviewer effects may occur. As such, interviewers are invaluable and a source of error at the same time.

The selection of good interviewers and appropriate training are therefore essential for high-quality surveys. However, little is still known about what constitutes a good interviewer and good training. Understanding the mechanisms behind interviewer effects requires information about the interviewers. There are three potential sources of interviewer information: First, the actual interview data and the information they contain about interviewer clustering. Second, paradata collected automatically during the data collection process; this may include information about how the data were collected (e.g. call record data), as well as information on the interview itself (e.g. response times or audio trails). Third, a survey administered to the deployed interviewers may collect data about relevant interviewer characteristics, such as experiences, attitudes, expectations, and general demographics.
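As background for the clustering referred to above, interviewer effects are commonly summarized with a random-intercept decomposition; the following is a generic textbook formulation, not a model taken from any of the session papers:

    y_{ij} = \mu + u_j + e_{ij}, \qquad u_j \sim N(0, \sigma_u^2), \quad e_{ij} \sim N(0, \sigma_e^2)
    \rho_{\mathrm{int}} = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_e^2}, \qquad \mathrm{deff} \approx 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}}

Here y_{ij} is the answer of respondent i interviewed by interviewer j, \rho_{int} is the interviewer intraclass correlation, and \bar{m} is the average interviewer workload; even a small \rho_{int} can inflate variances noticeably when workloads are large.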

This session will focus on research into explaining interviewer effects on various aspects of a survey using one or more sources of information about the interviewer.


Paper Details

1. Interviewer Characteristics and Response Behavior: Do Characteristics of Interviewers Affect Responses on Questions that are Sensitive with Respect to Data Privacy?

Dr Frank Reichert (National Educational Panel Study, University of Bamberg)

Interviewer characteristics can affect both the willingness to respond and the responses themselves. Past research has examined questions on income as well as other sensitive topics (e.g., health). The National Educational Panel Study (NEPS) also collects data about its interviewers, namely gender, age, education, interviewing experience, and enjoyment of communication (measured via interviewer fatigue). At the same time, the NEPS collects data that are sensitive with regard to data privacy under the German Federal Data Protection Act (BDSG): it must be assumed that, for instance, information about ethnicity, political opinion, religious beliefs, union membership, and health requires particular confidentiality. Since laws should reflect everyday needs and established everyday standards, data that are sensitive under the BDSG may be disclosed more reluctantly than other details. Such information could thereby be more susceptible to interviewer effects. Using CATI and CAPI surveys with data from different age cohorts, the paper therefore examines (1) whether the interviewer characteristics mentioned above affect the willingness to respond to sensitive questions, and (2) whether the responses actually collected are independent of interviewer characteristics. Bivariate and multivariate analyses are presented separately by survey mode (CATI vs. CAPI). Interaction effects between interviewer characteristics and respondent characteristics will also be examined. The findings are discussed in terms of the reliability and validity of the data collected in the NEPS, and with respect to optimizing the composition of the interviewing staff for specific survey contents and respondent groups.
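To make the analytic setup concrete, here is a minimal sketch of how such a random-intercept model for item nonresponse might be fitted. The variable names and the data file are hypothetical, and a linear-probability approximation is used for simplicity; the abstract does not specify the estimator.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per respondent, with an indicator for
    # item nonresponse on a sensitive question, the survey mode, and
    # the interviewer characteristics described in the abstract.
    df = pd.read_csv("neps_item_nonresponse.csv")  # hypothetical file

    # A random intercept per interviewer captures the clustering of
    # answers within interviewers; the fixed effects test whether
    # interviewer characteristics predict willingness to respond.
    model = smf.mixedlm(
        "item_nonresponse ~ iv_gender + iv_age + iv_education + iv_experience + mode",
        data=df,
        groups=df["interviewer_id"],
    )
    result = model.fit()
    print(result.summary())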


2. Interviewer and Incentive Effects in Recruitment Interviews for a Probability-based Online Panel

Ms Ines Schaurer (GESIS - Leibniz-Institute for the Social Sciences)
Ms Bella Struminskaya (GESIS - Leibniz-Institute for the Social Sciences)
Mr Lars Kaczmirek (GESIS - Leibniz-Institute for the Social Sciences)
Mr Wolfgang Bandilla (GESIS - Leibniz-Institute for the Social Sciences)

Variation across interviewers in response, contact, and cooperation rates is well documented. Likewise, the impact of incentives has been tested in different modes of data collection. Less is known, however, about how different interviewers cope with different incentive amounts.
We conducted an incentive experiment within telephone recruitment interviews for a probability-based online panel in June and July 2011. At the end of the interview, all respondents were asked whether they were willing to join a scientific online panel and fill out online questionnaires on a monthly basis. We varied the amount of the incentive promised for each subsequent participation in online surveys as follows:
- 5 Euros per online survey
- 2 Euros per online survey
- No incentives (control)
Both the telephone numbers and the incentive groups were assigned randomly to the different interviewers. In total, 702 interviews were completed by 16 interviewers.
First analyses confirm the tendency that higher incentives produce higher recruitment rates. Furthermore, we observe substantial between-interviewer variance in recruitment success.
In this presentation we take a first step toward analyzing the combined effect of the interviewer and the incentive amount on recruitment success. For the analysis we use information about the interviewers' sex, age, and years of interviewing experience, as well as their own hypothetical willingness to participate in such an online panel, and we control for the three incentive groups. On this basis we assess the relative importance of these two factors for recruitment success and derive practical implications for survey implementation.
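As an illustration of how such a combined analysis might look, here is a minimal sketch with hypothetical variable names, again using a linear-probability approximation with an interviewer random intercept; the abstract does not state the exact model.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical recruitment data: one row per completed interview,
    # with recruited = 1 if the respondent agreed to join the panel
    # and incentive in {"none", "2eur", "5eur"}.
    df = pd.read_csv("panel_recruitment.csv")  # hypothetical file

    # The incentive condition enters as a categorical fixed effect
    # with the no-incentive group as reference; the interviewer
    # random intercept absorbs between-interviewer variance.
    model = smf.mixedlm(
        "recruited ~ C(incentive, Treatment(reference='none')) "
        "+ iv_sex + iv_age + iv_experience + iv_own_willingness",
        data=df,
        groups=df["interviewer_id"],
    )
    result = model.fit()
    print(result.summary())

    # The estimated group (interviewer) variance indicates how much
    # recruitment rates differ across the 16 interviewers after
    # adjusting for the incentive condition.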