Conference Programme 2015
Tuesday 14th July – Friday 17th July
Tuesday 14th July, 14:00 - 15:30 Room: O-202
Interviewers’ Deviations in Surveys 2
Convenor: Dr Natalja Menold (GESIS)
Coordinator 1: Professor Peter Winker (University of Giessen)
Session Details
Interviewers’ behavior might have an impact on survey data. The session deals with deviations from instructions when contacting and interviewing respondents. These deviations might be caused by different factors such as task difficulty, interviewers’ ability, experience and motivation, but also by the quality of questionnaires and instructions. Deviations might result in bias, but could also foster data quality, e.g. if interviewers try to help respondents provide a (correct) answer. The session aims to discuss both deliberate deviations (e.g. falsifications, providing explanations) and non-deliberate deviations (e.g. interviewers’ mistakes).
Researchers are invited to submit papers dealing with all kinds of interviewers’ deviations in the survey process, which might result in non-observation or measurement errors but could also positively influence survey outcomes. Of interest are theoretical approaches and empirical studies on the detection and prevention of interviewers’ deviations, as well as on their explanatory factors and consequences. Thus, interviewers’ motivation to deviate from prescribed standards or to produce high-quality survey data, as well as interviewers’ cognitive skills and competencies, could be of interest.
Paper Details
1. Interviewer Effects on Measurement Error
Mrs Daniela Ackermann-Piek (German Internet Panel, SFB 884, University of Mannheim, Germany)
Professor Annelies G. Blom (German Internet Panel, SFB 884, University of Mannheim, Germany)
Interviewers are – intentionally or unintentionally – a potential source of survey errors in the data collection process. In this paper we address whether interviewers introduce measurement error on substantive variables, using data from PIAAC Germany. In our analyses we compare the variance introduced by interviewers on two types of data: direct measures of respondents’ competencies and background variables. The analyses are supplemented with interviewer characteristics and attitudes collected in an interviewer survey.
2. The Effect of Reading Numbers in Telephone Interviews on Response Behavior
Mrs Marieke Haan (University of Groningen)
Dr Yfke Ongena (University of Groningen)
In this paper, deviations in question reading by interviewers are analyzed. In our first (behavior coding) study we found that interviewers spontaneously added numbers to response options. Due to this change in question reading, respondents appeared to be better able to formulate an answer. In our second (experimental) study we specifically tested the effect of reading numbers. Remarkably, no effects of reading numbers on response behavior were found. We present several explanations for the different findings between the two studies, and emphasize the importance of making surveys cohesive and coherent so that deviating from standardized scripts is not necessary.
3. Effects of interviewer and respondent behavior on data quality: An investigation of question types and interviewer learning
Dr Antje Kirchner (University of Nebraska - Lincoln)
Dr Kristen Olson (University of Nebraska - Lincoln)
Despite attempts to fully standardize survey interviewing, interactions between respondents and interviewers are characterized by deviations from this protocol. Using structural equation modeling, this paper empirically examines the underlying latent factor structure for interviewer and respondent behaviors across different types of questions. We then assess how these factors affect data quality as measured by response latency and number of entry edits and how this relationship differs by question type. Finally, we investigate how these relationships change over the course of the data collection period, and are affected by sample composition and interviewers encountering less cooperative respondents.
4. Assessing interviewers’ reading-out latencies for monitoring data quality
Ms Johanna Bristle (Munich Center for the Economics of Aging - MPISOC)
Dr Michael Bergmann (Munich Center for the Economics of Aging - MPISOC)
This paper analyses how process-generated paradata can be used to investigate interviewers’ deviations from standardized interviewing. Data comes from the Survey of Health, Ageing and Retirement in Europe (SHARE), a cross-national face-to-face survey. Derived from keystroke data, we investigate interviewers’ reading-out durations of items without respondent interaction. Based on the criteria of adapting behaviour to the respondent and learning effects over the course of fieldwork, three general interviewing patterns can be identified: standardized interviewing, tailoring, and speeding. We further test if the interviewing patterns matter for the obtained data quality.