
ESRA 2019 full program




Undesirable Interviewer Behaviour and Data Falsification: Prevention, Detection and Intervention 1

Session Organisers Dr Joost Kappelhof (SCP)
Dr Roberto Bricenos-Rosas (GESIS)
Dr Caroline Vandenplas (KU Leuven)
Professor Geert Loosveldt (KU Leuven)
Dr Ineke Stoop (SCP)
Miss Céline Wuyts (KU Leuven)
Time: Tuesday 16th July, 11:00 - 12:30
Room: D19

Undesirable interviewer behaviour can occur in various ways, of which data falsification is an extreme example. In general, undesirable interviewer behaviour can be defined as any interviewer behaviour that can have a negative impact on data quality. Such behaviour can be unintentional or intentional, and it can occur during the performance of the interviewer's various tasks. These tasks relate both to contacting, selecting and convincing the respondent to participate and to the interaction with the respondent during the interview. In the context of standardized interviewing, these tasks must be performed according to some basic rules, and deviations from these rules can be considered an important category of undesirable interviewer behaviour. Typical examples are interviewers who rush through the questionnaire, pushing respondents to satisfice, or interviewers who anticipate requests for clarification by suggesting an answer. They may also engage in side conversations or fail to keep a neutral attitude. In the worst case, the interviewer fills in an answer without asking the question at all. We can regard the latter as falsification, also called 'curbstoning'.
Note that data falsification need not be committed only by interviewers: field supervisors, survey agencies and even researchers may be involved in undesirable practices such as data fabrication. The data processing step, during which 'advanced' data cleaning procedures are applied, seems particularly relevant in this context, as there appears to be a grey zone between data cleaning and data fabrication.
We are interested in papers discussing ways to detect and prevent undesirable behaviour or practices, tackling both the theoretical aspects and current practices. Examples of detection tools are interaction analysis, interviewer effects, unusual response patterns, partial duplicates, back checks and the use of paradata such as contact forms, keystrokes or time stamps. Prevention techniques can relate to interviewer briefing and training, fair interviewer payment, early detection of suspicious cases through fieldwork monitoring, or the development of easy-to-administer, low-burden questionnaires. Papers may also consider possible interventions after the detection of fraud or undesirable interviewer behaviour, either during fieldwork or post-survey.
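As an illustration of one of the detection tools mentioned above, the following minimal sketch (Python; the column names, threshold and toy data are all hypothetical) flags pairs of interviews whose answers are near-identical, which is the basic idea behind a partial-duplicate check:

```python
# Minimal sketch of a partial-duplicate check: for every pair of interviews,
# compute the share of identical answers across all items and flag pairs whose
# overlap exceeds a threshold. Column names and the threshold are illustrative.
from itertools import combinations

import pandas as pd

def partial_duplicates(df: pd.DataFrame, id_col: str = "interview_id",
                       threshold: float = 0.9) -> list[tuple]:
    """Return pairs of interview ids whose answers agree on at least
    `threshold` of the substantive items (all columns except `id_col`)."""
    df = df.reset_index(drop=True)  # assume positional integer index below
    items = df.drop(columns=[id_col])
    ids = df[id_col].tolist()
    flagged = []
    for (i, row_i), (j, row_j) in combinations(items.iterrows(), 2):
        share = (row_i.values == row_j.values).mean()
        if share >= threshold:
            flagged.append((ids[i], ids[j], round(share, 3)))
    return flagged

# Hypothetical usage with a toy answer matrix:
toy = pd.DataFrame({
    "interview_id": [101, 102, 103],
    "q1": [1, 1, 2], "q2": [3, 3, 1], "q3": [2, 2, 2], "q4": [5, 5, 4],
})
print(partial_duplicates(toy, threshold=0.75))  # -> [(101, 102, 1.0)]
```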


Keywords: data falsification, interviewer behaviour

Cross-Country Differences in Briefing Implementation in the European Social Survey – An Exploratory Study

Mr Niccolo Ghirelli (European Social Survey) - Presenting Author
Mr Rory Fitzgerald (European Social Survey)

Background and aims: It is known that interviewer characteristics and behaviour can cause unwanted ‘interviewer effects’ and that these differences can be magnified in cross-national surveys. In the European Social Survey (ESS), a number of efforts have been made to standardize interviewer behaviour by standardizing briefings and instructions. However, challenges remain, especially as the ESS conducts fieldwork only every second year and must field interviewers either by commissioning a survey agency or by hiring interviewers specifically for the ESS. Monitoring of interviewer effects across the ESS has mainly been performed with quantitative methods, by listing the briefing materials used in every participating country and analyzing interviewer effects in the ESS data. This paper aims to complement that work through observations of a small number of interviewer briefings across different countries (United Kingdom, Ireland and Italy, with the possible addition of Latvia, Montenegro and Slovakia).

Methods: Shadowing and participant observation are the main methods used. One briefing per country was attended to observe differences in approach, topics and materials between countries. Where possible, the briefings were audio-recorded to allow further examination.

Results: The expected results will consist of a preliminary overview of possible cross-national differences concerning interviewer briefings and data collection. The results will be used to develop code frames to be applied in a larger range of countries in future rounds.

Conclusions: The study is exploratory in nature, and the small number of cases considered is a clear limitation. Nevertheless, it aims to be a starting point for more complex and thorough monitoring in later rounds of the ESS and an opportunity to verify the implementation of ESS interviewer briefing protocols in future.


Measuring Interviewer Compliance with Regard to Question Deviations in a Multi-Language Survey in Zambia

Mrs P. Linh Nguyen (University of Essex - University of Mannheim) - Presenting Author

International development projects are usually evaluated for their impact using survey data as evidence. Due to the limited reach of telephone and internet infrastructure, interviewer-administered face-to-face (F2F) surveys are, and will remain, the principal data collection tool in developing countries. Although survey data are frequently collected, common practices for questionnaire pre-testing, as well as systematic evaluation of the interview administration process, have yet to be established broadly among practitioners in non-Western countries. In this light, this study presents new empirical insights into the quality of survey data collected in a Zambian setting. The existence of multiple ethnicities and languages in Zambia, as in the majority of African countries, poses challenges to any data collection, as interviewers are multilingual and thus able, and sometimes asked, to translate on the spot into a different language. The analysis draws on data from a face-to-face survey on standards of living, economic situation and financial behaviour in rural and semi-urban areas of Zambia. It was conducted in 2016 with more than 2,000 members of selected collective savings groups who are beneficiaries of a development programme. The focus of this investigation is to explore to what extent interviewers comply in delivering five selected factual and attitudinal questions in a standardized manner in this multi-lingual context. The questionnaire was translated from English into the dominant local language of each of the three survey locations. About 4,000 interviewer-respondent interactions on those questions were coded following an interaction coding scheme to study whether questions were administered following the purely standardized interviewing approach or whether there were minor or major deviations. The results from the interaction coding, as well as methodological issues in the development of the coding scheme and intercoder reliability, shall provide a basis for future improvements in interviewer training adjustment.
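The abstract refers to intercoder reliability for the interaction coding. As a minimal sketch of how pairwise agreement between two coders could be quantified, the following computes Cohen's kappa; the category labels are hypothetical and not taken from the study's actual coding scheme:

```python
# Minimal sketch: Cohen's kappa for two coders' labels on the same set of
# interviewer-respondent interactions. Categories are hypothetical
# ("exact" = question read as worded, "minor"/"major" = deviations).
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of interactions coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over categories of p_a(c) * p_b(c).
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical usage with six coded interactions:
a = ["exact", "minor", "exact", "major", "exact", "minor"]
b = ["exact", "minor", "minor", "major", "exact", "exact"]
print(round(cohens_kappa(a, b), 3))  # -> 0.455
```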


Interviewer Training as a Preventive Measure for Good Data Quality

Ms Jennifer Weitz (infas Institute of Applied Social Sciences) - Presenting Author

Survey organizations have to ensure that errors and effects are minimized by validating their data collection processes throughout the entire survey period. Special attention is paid to interviewer effects, which directly and indirectly influence data quality. Interviewer training is a first, and especially important, step in minimizing errors.
In general, training should always include modules such as an explanation of the planned study, the questionnaire and the documentation of answers (Stieler/Biedinger 2015; Schnell et al. 2011). In addition, project-specific training should always focus on the specifics and difficulties of each study.
The main interest of the National Educational Panel Study (NEPS) is to describe educational processes across the lifespan of individuals, thereby identifying causes and cause-effect relationships. Data collection therefore focuses on the initial recording and subsequent updating of life histories (dependent interviewing), which makes special demands on the interviewers. Among other things, the challenge is to allocate life-course episodes, according to given rules, to the correct subject modules of the survey instrument. Hence, the interviewers must be able to switch between standardized and conversational interviewing throughout the questionnaire. Their task thus involves special demands regarding interaction, background knowledge of the content of certain modules, and the rules governing various questionnaire segments. For this reason, intensive and problem-oriented training modules are essential in NEPS’s project-specific interviewer training.
This paper examines the effect of a customized training design on the correct allocation of reported episodes to the appropriate life-course modules, using descriptive and multivariate methods. For the first time, life-course episodes from two sub-studies collected in two different survey waves of the NEPS kick-off cohort 4 are used for the analysis. This data basis was chosen because, prior to interviewing the second subsample, the training element on capturing life-history information was significantly changed.


Interviewer Falsification: Systematic Comparison of Statistical Identification Methods

Miss Silvia Schwanhäuser (Institute for Employment Research (IAB)) - Presenting Author
Professor Joseph Sakshaug (University of Mannheim and Institute for Employment Research (IAB))
Dr Yuliya Kosyakova (Institute for Employment Research (IAB))

All interviewer-administered surveys are susceptible to intentional deviant behaviour by one or more interviewers. In the worst case, interviewers may fabricate entire interviews, which can have a strong negative impact on data quality. Detecting falsified interviews is therefore an important part of ensuring data quality.

Even though the literature on falsification has grown in recent years, many research gaps remain. One of these gaps concerns the utility and sensitivity of the different statistical identification methods proposed in the literature. Overall, the number of proposed and tested identification methods is considerable. Most of these methods are based on falsification indicators, which attempt to measure systematic differences between real and falsified data. Suggested indicators are numerous but can broadly be differentiated into formal indicators (analysing response behaviour), content-related indicators (analysing the distributions of different items) and paradata indicators (analysing differences within the paradata). Some authors use single indicators or compare the results of multiple indicators; others propose statistical methods that combine multiple indicator values (e.g. cluster analysis) to identify “at risk” interviewers. However, empirical evaluations of these methods in real-world settings are rare, and systematic, applied comparisons of these methods are simply missing from the literature.
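As an illustration of the multiple-indicator idea described above (not the authors' actual method), the following sketch standardizes a few hypothetical per-interviewer indicators and clusters interviewers, so that a small, conspicuous cluster can be singled out for manual review:

```python
# Illustrative sketch: standardize several falsification indicators per
# interviewer and cluster interviewers; the smaller cluster is inspected
# manually. Indicator names and the toy data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Toy indicator matrix: one row per interviewer (in practice these values
# would be computed from the survey data and paradata).
indicators = pd.DataFrame({
    "share_extreme_answers": rng.normal(0.2, 0.05, 50),
    "share_item_nonresponse": rng.normal(0.05, 0.02, 50),
    "mean_interview_minutes": rng.normal(45, 5, 50),
})
# Plant two conspicuous interviewers: very fast, extreme-heavy interviews.
indicators.iloc[:2] = [[0.45, 0.0, 20], [0.5, 0.01, 18]]

z = StandardScaler().fit_transform(indicators)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)

# The smaller cluster is a review list, not proof of fraud.
at_risk = np.flatnonzero(labels == np.argmin(np.bincount(labels)))
print("interviewers to review:", at_risk)
```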

We aim to compare existing as well as new indicators and statistical methods for identifying faked data, based on their utility and sensitivity. Specifically, we make use of large-scale, nationally representative survey data sources with known fake interviews, and test various falsification detection methods applied to household- and person-level interviews, interviewer observations, and paradata. Through this comparison we determine which methods are best suited for future quality assurance.


Managing (Undesirable) Interviewer Behavior via an Integrated System of Data Based Indicators

Mr André Müller-Kuller (Leibniz Institute for Educational Trajectories) - Presenting Author
Ms Hanna-Rieke Baur (Leibniz Institute for Educational Trajectories)

Managing interviewer behavior requires a framework that not only focuses on controlling and documenting fieldwork but also includes a quality-processing perspective. In this sense, we are currently developing an integrated system of indicator-based measures that take place before as well as during fieldwork (covering interviewer training, behavior and performance). This allows us to detect and manage (undesirable) interviewer behavior and thereby optimize data quality. Based on a general framework, specific measures can be deployed adaptively according to the characteristics of a survey (e.g. mode, setting, content).
For this purpose, we will apply a methodologically guided assembly of key indicators and seek extended use of the corresponding data sources, in particular paradata. Besides general performance indicators, such as participation and cooperation rates, the number of (successful) contact attempts or overall engagement, influential behavior patterns should be flagged (e.g. suspicious interview durations, prominent break-offs, dubious commitment rates). In addition, the survey data themselves provide clues about interviewer behavior, in particular for sensitive items (e.g. health, income, attitudes). Last but not least, we have started analyzing interviewer assessments as well as interviewers' feedback about the training they receive and training-related problems that occur during fieldwork, in order to make use of their expert knowledge from daily practice.
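As an illustration of one such indicator (not the authors' actual implementation), the following sketch flags interviewers whose median interview duration is implausibly short, using a robust z-score based on the median and the median absolute deviation; all names and thresholds are hypothetical:

```python
# Minimal sketch of a "suspicious interview duration" indicator: flag
# interviewers whose median duration is far below the fieldwork-wide
# distribution, using a robust z-score (median/MAD). Cutoff is illustrative.
import numpy as np
import pandas as pd

def suspicious_durations(df: pd.DataFrame, cutoff: float = -3.0) -> pd.Series:
    """df has one row per interview with columns 'interviewer_id' and
    'duration_min'; returns robust z-scores of the flagged interviewers."""
    med = df.groupby("interviewer_id")["duration_min"].median()
    mad = (med - med.median()).abs().median()
    robust_z = 0.6745 * (med - med.median()) / mad  # ~N(0,1) scaling
    return robust_z[robust_z < cutoff]  # unusually fast interviewers

# Hypothetical paradata: interviewer 9 rushes through interviews.
rng = np.random.default_rng(1)
para = pd.DataFrame({
    "interviewer_id": np.repeat(np.arange(10), 20),
    "duration_min": rng.normal(40, 6, 200),
})
para.loc[para.interviewer_id == 9, "duration_min"] = rng.normal(12, 2, 20)
print(suspicious_durations(para))  # flags interviewer 9
```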
Within the National Educational Panel Study (NEPS), various types of (para)data are collected that we are currently analyzing. In this presentation we would like to present and discuss the framework of our quality-processing perspective, some key indicators of undesirable interviewer behavior, as well as management measures that include interventions during fieldwork and prevention techniques applied beforehand (e.g. optimizing and individualizing training based on interviewers' experience and feedback).