


Undesirable Interviewer Behaviour and Data Falsification: Prevention, Detection and Intervention 2

Session Organisers: Dr Joost Kappelhof (SCP)
Dr Roberto Briceno-Rosas (GESIS)
Dr Caroline Vandenplas (KU Leuven)
Professor Geert Loosveldt (KU Leuven)
Dr Ineke Stoop (SCP)
Miss Céline Wuyts (KU Leuven)
Time: Tuesday 16th July, 14:00 - 15:30
Room: D19

Undesirable interviewer behaviour can occur in various ways, of which data falsification is an extreme example. In general, undesirable interviewer behaviour can be defined as any interviewer behaviour that can have a negative impact on data quality. Such behaviour can be unintentional or intentional, and it can occur during the performance of the interviewer's various tasks. These tasks relate both to contacting, selecting and convincing the respondent to participate, and to the interaction with the respondent during the interview. In the context of standardized interviewing, these tasks must be performed according to some basic rules, and deviations from these rules can be considered an important category of undesirable interviewer behaviour. Typical examples are interviewers who rush through the questionnaire, pushing respondents to satisfice, or interviewers who anticipate requests for clarification by suggesting an answer. They may also engage in side conversations or fail to keep a neutral attitude. In the worst case, the interviewer fills in an answer without asking the question at all. We can regard the latter as falsification, also called curbstoning.
Note that data falsification need not be committed by interviewers alone: field supervisors, survey agencies and even researchers may be involved in undesirable practices such as data fabrication. The data processing step, during which ‘advanced’ data cleaning procedures are applied, is particularly relevant in this context, as there is a grey zone between data cleaning and data fabrication.
We welcome papers discussing ways to detect and prevent undesirable behaviour or practices, addressing both theoretical aspects and current practice. Examples of detection tools are interaction analysis, interviewer effects, unusual response patterns, partial duplicates, back checks, and the use of paradata such as contact forms, keystrokes or time stamps. Prevention techniques can relate to interviewer briefing and training, fair interviewer payment, early detection of suspicious cases through fieldwork monitoring, or the development of easy-to-administer, low-burden questionnaires. Papers may also consider possible interventions after fraud or undesirable interviewer behaviour has been detected, either during fieldwork or post-survey.
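To make one of these detection tools concrete, the following sketch (Python; all function and variable names are illustrative, not taken from any specific package) shows the basic idea behind partial-duplicate detection: flag pairs of interviews that share an unusually high proportion of identical answers.

```python
from itertools import combinations

def flag_partial_duplicates(interviews, threshold=0.85):
    """Flag interview pairs sharing an unusually high share of identical answers.

    `interviews` maps an interview ID to a list of answers in the same
    question order; layout and threshold are illustrative assumptions.
    """
    suspicious = []
    for (id_a, a), (id_b, b) in combinations(interviews.items(), 2):
        matches = sum(x == y for x, y in zip(a, b))
        rate = matches / min(len(a), len(b))
        if rate >= threshold:
            suspicious.append((id_a, id_b, round(rate, 3)))
    return suspicious

# Example: interview "r3" is a near copy of "r1".
data = {
    "r1": [1, 4, 2, 5, 3, 1, 2],
    "r2": [2, 3, 5, 1, 4, 2, 5],
    "r3": [1, 4, 2, 5, 3, 1, 5],
}
print(flag_partial_duplicates(data))  # [('r1', 'r3', 0.857)]
```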


Keywords: data falsification, interviewer behaviour

A Strategy for Promoting Desirable Interviewer Behaviour within the ESS

Dr Sander Steijn (Sociaal & Cultureel Planbureau) - Presenting Author
Mr Roberto Briceno-Rosas (GESIS)
Dr Joost Kappelhof (Sociaal & Cultureel Planbureau)
Miss Jannine van de Maat (Sociaal & Cultureel Planbureau)

As long as the ESS relies on interviewers for data collection, desirable interviewer behaviour (DIB) is fundamental for achieving high-quality and comparable data. This paper discusses how a newly proposed ESS work package aims to promote DIB in a way that is consistent with the ESS's targets and, accordingly, to minimize the errors introduced by interviewers that would affect the quality of the resulting survey data.
The work package attempts to mitigate factors that originate, promote or facilitate undesirable interviewer behaviour (UIB). It distinguishes between two ways in which interviewers might affect data quality: selection and recruitment (affecting representation), and interviewing (affecting measurement). Three domains where DIB can be promoted are identified: (1) planning and prevention, (2) monitoring and detection, and (3) assessment and evaluation.
In the first domain, efforts are directed at promoting DIB before the start of fieldwork. This entails a review of country-specific issues, a revision of the specifications provided to countries, a revision of instruments currently used in the ESS (contact forms and the ESS questionnaire), and an examination of improvements in the planning of the interviewer workforce and in the preparation of interviewers.
The second domain focuses on monitoring and aims to detect and deal with UIB during fieldwork. Its objective is the early detection of interviewer-controlled issues affecting both the recruitment and selection of respondents and the interviews themselves.
The last domain focuses on assessing the prevalence and impact of UIB after fieldwork. It entails an assessment of interviewer adherence to protocols for (a) recruiting and selecting respondents and (b) interviewing respondents, as well as an evaluation of measures for critical issues such as falsification, and of the impact of interviewer effects on survey estimates, such as design effects.


Effects of Interviewers' Experience on Skipping Filter Questions and Interview Duration: Empirical Analysis Using IAB-BAMF-SOEP Survey of Refugees in Germany

Dr Yuliya Kosyakova (Institute for Employment Research (IAB)) - Presenting Author
Miss Christina Reisinger (Institute for Employment Research (IAB))
Miss Silvia Schwanhäuser (Institute for Employment Research (IAB))
Dr Joe Sakshaug (Institute for Employment Research (IAB))

Previous literature on undesirable interviewer behavior suggests that the falsification strategy and the “quality” of falsifications change with interviewer experience. “Quality” here means that falsifiers succeed in fitting their fabricated data to the marginal distributions found in real data but often struggle to reproduce more complex relationships (such as those revealed by factor analyses or multivariate regression analyses). However, prior research is inconclusive regarding the relationship between interviewer experience and undesirable interviewer behavior. Some studies have shown that interviewers with less professional experience (general experience) tend to falsify a larger portion of their assigned cases than their experienced colleagues do. Other studies reveal that interviewers’ effort in data falsification increases with survey-specific experience gained during the field period (survey experience). To our knowledge, no prior study has attempted to incorporate both types of experience and to test the relationship between them.
In this paper, we address this research gap by exploring interviewer effects on filter questions and on the length of the interview, and whether these effects vary with interviewers’ general experience and survey experience. Although interview length is a common indicator for assessing deceptive interviewer behavior, the literature is less clear on whether the effect of interviewer experience is driven by fraud or arises from increasing efficiency in administering a survey. We therefore additionally look at the skipping of filter questions (the triggering rate), which allows interviewers to avoid follow-up questions and shorten the interview.
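As a rough illustration of how such a triggering rate might be computed per interviewer, consider the following sketch (Python; the data layout and names are assumptions for illustration, not the authors' actual implementation):

```python
from collections import defaultdict

def triggering_rates(records):
    """Per-interviewer share of filter questions answered so that
    follow-up questions are skipped.

    `records` holds one tuple per interview:
    (interviewer_id, n_filters_asked, n_filters_triggering_skip).
    """
    asked = defaultdict(int)
    skipped = defaultdict(int)
    for interviewer, n_asked, n_skip in records:
        asked[interviewer] += n_asked
        skipped[interviewer] += n_skip
    return {i: skipped[i] / asked[i] for i in asked if asked[i] > 0}

interviews = [("A", 20, 6), ("A", 20, 7), ("B", 20, 16), ("B", 20, 18)]
print(triggering_rates(interviews))  # {'A': 0.325, 'B': 0.85} -> B looks suspicious
```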
Using verified falsified interview data from a large-scale survey in Germany, we find that field experience is associated with shorter interviews, albeit only for inexperienced interviewers. Experienced interviewers generally conduct longer interviews, and their interview length does not vary with field experience. The triggering-rate results will be reported in the paper.


Do Falsifiers Leave Traces? An Attempt to Identify Interviewer Falsifications by Questionnaire-Based Indicators

Ms Sandra Walzenbach (ISER, University of Essex) - Presenting Author

Previous research has shown that falsifications by interviewers can systematically distort analysis results and are therefore a potential threat to data quality. This is particularly true if interviewers use stereotypes about their respondents to fabricate data.
To deal with this issue, survey institutes and researchers most often use external control mechanisms to detect falsifications. As a mostly supplementary strategy, questionnaire-based criteria have also been proposed.
The latter approach is based on the assumption that falsifiers act in line with rational choice theory and hence show certain response patterns that distinguish them from real respondents. However, there is still little systematic research on the validity of such indicators.
The study to be presented uses survey data on income inequality in Germany to discuss to what extent falsifiers’ response patterns actually meet these theoretical expectations and can be identified by such indicators. A unique advantage of the data at hand is that we do not rely on artificial material created in laboratory settings but on authentic falsifications, which were identified by external control mechanisms and admitted by the responsible survey agency.
Among the tested indicators are the share of extreme categories ticked on ordinal response scales, item nonresponse, the strategic use of filter questions to shorten the questionnaire, and the omission of open answers. In addition, the available numerical answers were tested for conformity with the Benford distribution.
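As an illustration of the last indicator, a minimal Benford check might look like the following sketch (Python; the function name and input format are assumptions, not the author's implementation):

```python
import math
from collections import Counter

def benford_chi_square(values):
    """Chi-square statistic comparing leading-digit frequencies of the
    numerical answers in `values` against Benford's law.

    With 8 degrees of freedom, statistics above ~15.5 suggest deviation
    at the 5% level.
    """
    # Expected Benford proportions for leading digits 1..9
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    leading = []
    for v in values:
        for ch in str(abs(v)):  # first non-zero digit is the leading digit
            if ch.isdigit() and ch != "0":
                leading.append(int(ch))
                break
    n = len(leading)
    observed = Counter(leading)
    return sum((observed.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in expected.items())

# Example: hypothetical reported incomes from one interviewer's caseload
incomes = [1200, 1850, 2300, 950, 3100, 1400, 2750, 1100]
print(round(benford_chi_square(incomes), 2))
```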


An Economic Analysis of Interviewer Falsifications

Professor Joseph Sakshaug (Institute for Employment Research / University of Mannheim) - Presenting Author
Mr Lukas Olbrich (Institute for Employment Research)
Dr Yuliya Kosyakova (Institute for Employment Research)
Ms Silvia Schwanhäuser (Institute for Employment Research / University of Mannheim)

Deviant interviewer behavior can take different forms, the most severe being the falsification of complete interviews. Interviewer falsifications have been shown to substantially influence the results of survey data. To detect and prevent this behavior, it is necessary to understand the motivations and strategies of falsifiers.
This study focuses on the development of the falsifier's behavior over time, which has been neglected in the literature. To this end, an economic model based on insights from the literature on interviewer falsification and criminology is combined with Bayesian learning theory. The model predicts that an inexperienced falsifier will lower the falsification effort over time after each non-detected falsified interview. By contrast, an experienced falsifier's strategy will remain constant during the field period: because he trusts his prior information about his supervisor's monitoring strategy, he will show little variance and no trend in his behavior.
The model is tested using data which include verified falsifications. To empirically evaluate the falsifier's effort over time, we rely on formal indicators developed to detect interviewer falsifications and construct additional measures. Lastly, we test whether falsifying interviewers can be detected by their deviating strategy throughout the field period.
Investigating behavior over time has multiple advantages: interviewers who start to falsify during the field period can be detected by sudden shifts in the measures, experienced falsifiers attract attention by their reduced variance, and inexperienced interviewers raise suspicion by changes in the indicators.
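A toy illustration of this qualitative prediction, not the authors' actual model: a Beta-Bernoulli learner whose falsification effort tracks the posterior mean of the perceived detection risk. With a weak prior (inexperienced falsifier) the effort drops after each undetected interview; with a strong prior (experienced falsifier) it stays nearly constant.

```python
def simulate_effort(n_interviews, prior_alpha, prior_beta):
    """Toy Beta-Bernoulli learner: after each non-detected falsified
    interview, the falsifier updates the perceived detection risk and
    scales falsification effort with the posterior mean.

    The names and the effort rule are illustrative assumptions.
    """
    alpha, beta = prior_alpha, prior_beta  # Beta prior on detection risk
    efforts = []
    for _ in range(n_interviews):
        efforts.append(alpha / (alpha + beta))  # effort ~ perceived risk
        beta += 1  # another falsified interview went undetected
    return efforts

# Weak prior (inexperienced falsifier): effort declines quickly.
print(simulate_effort(5, prior_alpha=1, prior_beta=1))
# Strong prior (experienced falsifier): effort stays nearly constant.
print(simulate_effort(5, prior_alpha=10, prior_beta=90))
```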


Interview Speed Patterns versus Deviations from Standardized Interviewing Protocol in Audio-Recorded Interviews for (Early) Identification of Field Interviewers’ Harmful Interviewing Practice: A Test Case

Ms Celine Wuyts (Centre for Sociological Research, KU Leuven) - Presenting Author
Professor Geert Loosveldt (Centre for Sociological Research, KU Leuven)

The interviewing behaviour of survey interviewers has long been recognized as an important contributor to measurement error in survey data. We compare two possible approaches for identifying interviewers whose interviewing practice may be harmful to data quality. The first approach relies on audio-recorded interviews, from which compliance with the standardized interviewing protocol can be assessed. Behavioural codes capture actual interviewer behaviour in the interview process and can be applied and examined early in the data collection period, but coding is a time-intensive and skill-dependent activity. The second approach is based on interview timer paradata, from which interviewer-level interview speed indicators, as indirect measures of interviewer behaviour, can easily be computed over the course of the data collection period. This approach is essentially costless in the context of computer-assisted interviewing.
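A minimal sketch of the second approach (Python; the data layout and the z-score flagging rule are illustrative assumptions, not the authors' exact indicator):

```python
from statistics import mean, stdev

def flag_fast_interviewers(timings, z_cutoff=-1.4):
    """Flag interviewers whose average time per question is unusually short.

    `timings` maps an interviewer ID to a list of per-interview average
    seconds-per-question values derived from timer paradata; the layout
    and cutoff are assumptions for this small-sample example.
    """
    speeds = {i: mean(vals) for i, vals in timings.items()}
    overall_mean = mean(speeds.values())
    overall_sd = stdev(speeds.values())
    return [i for i, s in speeds.items()
            if (s - overall_mean) / overall_sd < z_cutoff]

paradata = {
    "int01": [9.8, 10.5, 11.0],
    "int02": [10.2, 9.9, 10.8],
    "int03": [4.1, 3.8, 4.5],   # suspiciously fast
    "int04": [10.9, 11.3, 10.1],
}
print(flag_fast_interviewers(paradata))  # ['int03']
```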

We test these two approaches for the Dutch-speaking subsample of interviewers employed in two survey rounds of the European Social Survey in Belgium. We make use of timer paradata collected in the European Social Survey since Round 6 and audio recordings that were coded for compliance with standardized interviewing protocol for this subsample of interviewers.

We observe that interviewers who score higher on an overall standardized-interviewing deviations measure, derived from a simple checklist applied to one (early) audio-recorded interview, tend to contribute more to interviewer effects as observed at the end of the data collection. The interview speed indicators derived from timer paradata, however, have better predictive power, even when computed over the first four or five interviews. These initial findings suggest that timer paradata may be very useful for screening interviewers for closer monitoring. Audio recordings may be appropriate for removing or retraining interviewers with the worst interviewing practice early in the data collection. Given their high cost, however, resources may be more efficiently allocated when assessments of audio recordings are targeted at interviewers first flagged by the interview speed indicators.