Thursday 18th July 2013, 09:00 - 10:30, Room: No. 14

The evaluation of interviewer effects on different components of the total survey error

Convenor: Professor Geert Loosveldt (Dept Sociology, K.U.Leuven)
Coordinator 1: Professor Patrick Sturgis (School of Social Sciences, University of Southampton)

Session Details

Although there is a long tradition of evaluating interviewer effects in face-to-face survey interviews, several recent papers show renewed interest in the subject. The reported results also make clear that interviewer effects remain a relevant topic in survey methodology. A common characteristic of these publications is that they link interviewer effects to other components of the 'Total Survey Error' framework [e.g. the correlation between interviewer-induced nonresponse bias and measurement error (Brunton-Smith et al., 2012); the role of interviewer experience in acquiescence (Olson et al., 2011); the relationship between nonresponse variance and interviewer variance (West et al., 2010)]. In this session on interviewer effects we want to continue this approach, evaluating the impact of interviewers on different types of selection and measurement error (e.g. unit and item nonresponse, the amount of information, socially desirable answers, and other response tendencies).


Paper Details

1. Item Nonresponse in Face-to-Face Interviews with Children

Dr Sigrid Haunberger (University of Applied Sciences Northwestern Switzerland)

Survey researchers increasingly collect data from children themselves when they are interested in children's upbringing, perspectives, attitudes, beliefs, and behaviour. Proxy reporting by parents or other caretakers is no longer seen as a suitable and satisfactory means of data collection.
Item nonresponse in child surveys in general and specifically in standardized face-to-face child interviews, however, has received only limited attention until now.
The main aim of this paper is to answer the question of whether and how child and interviewer characteristics, as well as the interview setting, affect item nonresponse in standardized face-to-face interviews with children.
For this purpose we use data from the child longitudinal study conducted by the German Youth Institute, in which children (ages 8-11) were interviewed in a standardized way in three survey stages. We computed multilevel logistic models with the software HLM 7.0 to better disentangle interviewer effects from respondent effects.
Depending on the type of question, we found different effects for respondent and interviewer variables, as well as interaction effects between child age and interviewer age and between child gender and interviewer gender. However, the interviewer variance is mostly not significant.
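The multilevel logistic approach described in the abstract can be sketched as follows. This is a generic two-level specification with illustrative predictors (child age, interviewer age, and their interaction), not the authors' exact model: for an item-nonresponse indicator $y_{ij}$ of child $i$ interviewed by interviewer $j$,

```latex
\operatorname{logit}\Pr(y_{ij}=1)
  = \beta_0
  + \beta_1\,\text{age}^{\text{child}}_{ij}
  + \beta_2\,\text{age}^{\text{int}}_{j}
  + \beta_3\,\bigl(\text{age}^{\text{child}}_{ij}\times\text{age}^{\text{int}}_{j}\bigr)
  + u_j,
\qquad u_j \sim N(0,\sigma^2_u).
```

The random effect $u_j$ captures between-interviewer variation; the finding that interviewer variance is mostly not significant corresponds to $\sigma^2_u$ being close to zero.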


2. Explaining interviewer effects on interviewer fieldwork performance indicators

Professor Geert Loosveldt (University of Leuven)
Mr Koen Beullens (University of Leuven)
Mr Dries Tirry (University of Leuven)

Fieldwork managers generally use some basic performance indicators to evaluate interviewers during the fieldwork of a survey. These indicators relate to the interviewer's task of contacting the sample units and obtaining participation. Among others, the interviewer's contact rate, nonresponse rate, refusal rate, mean number of contacts, workload, and the time needed to finish the assignment are considered relevant interviewer performance indicators in this context. Based on these indicators, the ideal interviewer is one who is willing to process a sufficient number of interviews (workload), realizes a high response rate and contact rate, and finishes the assignment within the agreed time. The implicit assumption is that such interviewers contribute to the quality of the realized sample and data. Previous research makes clear that interviewers differ in fieldwork performance; in other words, there are interviewer effects on the fieldwork performance indicators. The main research question of the paper concerns the relevance of some job-related interviewer characteristics for explaining these interviewer effects. These characteristics include the interviewer's expectations about participation, the interviewer's evaluation of the remuneration, the possibility of working on several surveys simultaneously, and his or her workload for other surveys. During the fifth round of the ESS in Belgium, several of these interviewer characteristics were collected by means of an interviewer questionnaire. These data will be used in the analysis.
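The indicators listed in the abstract can be illustrated with a small computation per interviewer. The field names and rate definitions below are illustrative assumptions (operational definitions, e.g. AAPOR-style outcome rates, vary across surveys), not the paper's own operationalisation:

```python
from collections import Counter

def performance_indicators(cases):
    """Basic per-interviewer fieldwork indicators from a list of case
    records, each with a final 'outcome' ('interview', 'refusal',
    'noncontact', 'other') and the number of 'contacts' attempted.
    Definitions are illustrative, not AAPOR-exact."""
    counts = Counter(case["outcome"] for case in cases)
    issued = len(cases)                       # workload: cases assigned
    contacted = issued - counts["noncontact"]  # cases ever contacted
    attempts = sum(case["contacts"] for case in cases)
    return {
        "workload": issued,
        "contact_rate": contacted / issued,
        "response_rate": counts["interview"] / issued,
        "refusal_rate": counts["refusal"] / issued,
        "mean_contacts": attempts / issued,
    }

# Hypothetical assignment of four sample units to one interviewer:
cases = [
    {"outcome": "interview", "contacts": 2},
    {"outcome": "interview", "contacts": 1},
    {"outcome": "refusal",   "contacts": 3},
    {"outcome": "noncontact", "contacts": 4},
]
print(performance_indicators(cases))
```

Computing such a dictionary per interviewer, and then comparing interviewers, is what reveals the between-interviewer differences the paper seeks to explain.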


3. Do interviewers influence respondent propensity to 'satisfice'?

Ms Gosia Turner (University of Southampton)
Professor Patrick Sturgis (University of Southampton)
Professor David Martin (University of Southampton)
Professor Chris Skinner (London School of Economics)

It is frequently asserted that a primary benefit of interviewer-administered surveys, relative to self-administered ones, is that interviewers can help to motivate respondents to provide accurate and well-considered responses. However, an under-acknowledged implication of this assumption is that interviewers may introduce an additional source of variability into survey estimates, insofar as they vary in their ability to motivate accurate responding. A primary determinant of respondent-driven measurement error is the level of cognitive effort applied in answering questions. While some respondents invest a great deal of time and effort to come up with an accurate response, others employ what Jon Krosnick has termed a 'satisficing' strategy, which enables them to provide an 'acceptable' response for the minimum possible effort. Such strategies include, but are not limited to, yea-saying, choosing the first 'reasonable' response option presented in a list, agreeing with statements presented in the questionnaire, choosing a 'Don't know' option rather than providing a substantive answer, and rounding answers to behavioral frequency questions. In this paper we investigate the extent to which interviewer characteristics predict respondents' tendency to adopt such satisficing response sets. We use cross-classified multilevel models applied to data from the UK National Travel Survey, linked to paradata on interviewer characteristics, attitudes, and beliefs, to identify the interviewer contribution to a range of measures of respondent satisficing.
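The satisficing behaviours listed above can be operationalised as simple per-respondent flags. The variable names and the 10% don't-know threshold below are illustrative assumptions, not the measures used in the paper:

```python
def satisficing_flags(agree_items, dont_know_count, n_items, frequency_answers):
    """Illustrative per-respondent satisficing indicators:
    - straight-lining: identical answers on all agree/disagree items
    - high don't-know rate: above an arbitrary 10% threshold
    - rounding: all behavioural frequency answers are multiples of 5
    """
    return {
        "straight_lining": len(set(agree_items)) == 1,
        "high_dont_know": dont_know_count / n_items > 0.10,
        "rounded_frequencies": all(x % 5 == 0 for x in frequency_answers),
    }

flags = satisficing_flags(
    agree_items=[4, 4, 4, 4],       # same answer to every statement
    dont_know_count=3, n_items=20,  # 3 of 20 items answered "Don't know"
    frequency_answers=[10, 25, 5],  # reported trip counts, all round numbers
)
print(flags)
```

Flags of this kind would form the outcome measures, with interviewer characteristics entering as predictors in the cross-classified multilevel models.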


4. Interviewer effects on permission rates for secondary respondents in a multi-actor survey

Mrs Inge Pasteels (University of Antwerp)

In surveys with a multi-actor design, persons are sampled as members of a well-defined population. During the contact attempt or the interview of these so-called primary respondents, other persons related to them in a well-defined way are selected as "secondary respondents". The main purpose of multi-actor surveys is to obtain information from these secondary respondents as well. Since only the primary respondent is selected directly and only his or her address is known, contact information for the secondary respondent is required and has to be provided by the primary respondent. However, if primary respondents do not want interviewers or researchers to contact the relatives selected as secondary respondents, or if they do not know the address, contact attempts are impossible. The lack of addresses, or the primary respondent's refusal to allow secondary respondents to be approached, has to be considered an additional survey outcome for secondary respondents in multi-actor surveys. Besides response, co-operation, contact, refusal, and eligibility rates for primary or secondary respondents, permission rates can therefore also be calculated from this additional survey outcome on obtaining addresses in a multi-actor survey. In this paper we focus on the determinants of permission rates. Special attention will be given to the association between intra-interviewer response rates and intra-interviewer permission rates. Interviewer-level variables as well as sample-unit-level variables will be included in a multilevel analysis.