Tuesday 14th July, 11:00 - 12:30 Room: O-202


Interviewers’ Deviations in Surveys 1

Convenor Dr Natalja Menold (GESIS)
Coordinator 1 Professor Peter Winker (University of Giessen)

Session Details

Interviewers’ behavior might have an impact on survey data. The session deals with deviations from instructions when contacting and interviewing respondents. These deviations might be caused by different factors such as task difficulty, interviewers’ ability, experience and motivation, but also by the quality of questionnaires and instructions. Deviations might result in bias, but could also foster data quality, e.g. when interviewers help respondents provide a (correct) answer. The session aims to discuss both deliberate deviations (e.g. falsifications, providing explanations) and non-deliberate ones (e.g. interviewers’ mistakes).
Researchers are invited to submit papers dealing with all kinds of interviewers’ deviations in the survey process, whether they result in non-observation or measurement errors or positively influence survey outcomes. Of interest are theoretical approaches and empirical studies on the detection and prevention of interviewers’ deviations, as well as on their explanatory factors and consequences. Interviewers’ motivation to deviate from prescribed standards or to produce high-quality survey data, as well as their cognitive skills and competencies, could also be of interest.

Paper Details

1. Do formal survey data properties allow detection of falsified data?
Professor Ivan Rimac (University of Zagreb Faculty of Law Department of Social Work)
Dr Jelena Ogresta (University of Zagreb Faculty of Law Department of Social Work)

Previous studies on falsification in survey data collection gave indications that fabricated data can be detected on the basis of formal indicators. Most of these studies were done on experimentally obtained data. The aim of this study was to apply a set of formal indicators and to analyze differences between real and falsified data detected in a real setting. The data are derived from the Household Need Assessment survey in Croatia. Through the control process, 24.7% of interviews were detected as incorrect. Different statistical procedures were applied to the formal data indicators, but no clear and consistent evidence for the possibility of detecting falsified data was found.


2. Interviewers’ Abilities and the Quality of Responses
Professor Joerg Blasius (University of Bonn)

In this paper we discuss various strategies interviewers might employ to fabricate parts of their interviews. Among other strategies, interviewers could ask only one or two questions from a battery of items and then “generalize” these answers to the entire set. Our interest is twofold: we try to explain why interviewers fabricate parts of their interviews, and we estimate the effects caused by deviant interviewers. As an example we use the German Social Survey 2008, which is among the best data sets available to the social sciences in Germany.


3. Detecting Fraudulent Interviewers by Improved Clustering Methods -- The Case of Falsifications of Answers to Parts of a Questionnaire
Professor Peter Winker (Justus-Liebig-University Giessen)
Mr Samuel De Haas (Justus-Liebig-University Giessen)

Falsified interviews represent a serious threat to empirical research. Applying cluster analysis to a set of indicators has been shown to allow the identification of suspicious interviewers when a substantial share of their interviews are complete falsifications. The analysis is extended to the case in which only a share of the questions within all interviews provided by an interviewer is faked. Based on a unique experimental dataset, it is possible to construct many synthetic data sets with the required properties. A bootstrap approach is used to evaluate the robustness of the method for varying shares of falsifications within interviews.


4. Explaining Political Action: A Comparison of Real and Falsified Survey Data
Mrs Uta Landrock (GESIS)
Dr Natalja Menold (GESIS)

Face-to-face interviewing is an important mode of data collection. The interviewer plays a central role, but data falsification can seriously compromise data quality. We analyse differences between real and falsified data. Our database consists of two datasets: real interviews and falsified interviews fabricated in the lab. We use both datasets to conduct multivariate analyses and compare the results. We model the effects of political values and attitudes on political participation. The results are discussed in light of theories of social cognition and interviewers’ motivation, as well as with respect to the identification of falsified data.