
ESRA 2023 Program

All time references are in CEST

Falsification detection in times of crisis: Challenges, opportunities, and new directions 1

Session Organisers: Mr Markus Bönisch (Statistics Austria), Dr Eduard Stöger (Statistics Austria)
Time: Wednesday 19 July, 14:00 - 15:00
Room: U6-11

With response rates for face-to-face sample surveys continuing to decline, incentives for interviewers to boost production may increase, and with them the temptation to falsify data. Methods exist to identify interviewers whose cases warrant further investigation. These approaches include validating finalized cases by calling respondents and asking them to confirm the contact by the interviewer, as well as assessing the collected data itself, including interview data, paradata, and GPS data. This session addresses how to maintain data quality in the face of falling response rates, in particular how to detect suspected falsification through a data-driven process, substantiate those suspicions, and investigate them. Presentations will cover emerging approaches to detecting potential falsification. The session includes a variety of surveys (e.g., the Programme for the International Assessment of Adult Competencies, PIAAC) from different countries and sectors (government, private sector, university).

Keywords: falsification, data, quality

Papers

Detecting interview location without recording it: introducing Virtual Surrounding Impression

Dr May Doušak (Researcher) - Presenting Author
Dr Joost Kappelhof (Head of methodology, SCP)
Mr Roberto Briceno-Rosas (Researcher)

Well-trained interviewers are vital for collecting high-quality comparative data in face-to-face surveys. Yet even the best interviewers "hit a wall" with unwilling respondents or other challenges, such as personal or professional setbacks. When this happens, most interviewers discuss the problems with the agency and find a solution.
But sometimes, individual interviewers choose to "fix" the problem by taking shortcuts through undesired interviewer behaviour (UIB), such as speeding, interviewing substitute respondents, or even falsifying parts of the interview.
There are many tools for detecting UIB. For example, time stamps are suitable for detecting inconsistencies such as speeding, moving back and forth through the questionnaire (to "fill in the gaps"), parallel interviews, and unlikely time frames or delays between interviews.
Interviewers can (and sometimes do) fool the time stamps, for example by fabricating interviews at a plausible pace at home, interviewing substitute respondents, or other, sometimes highly creative, workarounds.
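
As a rough illustration of such timestamp checks (a minimal sketch, not the authors' tooling), the following Python snippet flags implausibly short interviews and overlapping interviews by the same interviewer; the record layout and the 20-minute threshold are assumptions for illustration only.

```python
from datetime import datetime

# Hypothetical interview records: (interviewer_id, start, end).
interviews = [
    ("int01", datetime(2023, 7, 19, 9, 0), datetime(2023, 7, 19, 9, 12)),
    ("int01", datetime(2023, 7, 19, 9, 10), datetime(2023, 7, 19, 9, 55)),
    ("int02", datetime(2023, 7, 19, 10, 0), datetime(2023, 7, 19, 10, 50)),
]

MIN_MINUTES = 20  # assumed plausibility threshold for a full interview

def flag_speeding(records, min_minutes=MIN_MINUTES):
    """Flag interviews shorter than a plausible minimum duration."""
    return [r for r in records
            if (r[2] - r[1]).total_seconds() / 60 < min_minutes]

def flag_overlaps(records):
    """Flag consecutive interviews by the same interviewer that overlap."""
    flagged, by_interviewer = [], {}
    for r in records:
        by_interviewer.setdefault(r[0], []).append(r)
    for recs in by_interviewer.values():
        recs.sort(key=lambda r: r[1])  # order by start time
        for a, b in zip(recs, recs[1:]):
            if b[1] < a[2]:  # next one starts before the previous ends
                flagged.append((a, b))
    return flagged

print(flag_speeding(interviews))  # the 12-minute interview
print(flag_overlaps(interviews))  # int01's two overlapping interviews
```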
Such evasion would be easy to detect if agencies could record audio, photographs, video, or the GPS location via the survey device to check the interview surroundings. Doing so, however, is complicated and ethically questionable due to privacy concerns.
We developed the "virtual surrounding impression" (VSI), which generates a unique fingerprint of a location from Wi-Fi, device, and user information. While no actual location data (such as Wi-Fi or GPS coordinates) is ever saved, VSI allows us to detect a change of location while fully respecting the privacy of both respondent and interviewer.
A device- and user-unique "virtual surrounding impression" fingerprint thus allows for detecting whether:
• a single interviewer has completed multiple interviews at the same location on their device
• the location has changed during a single interview
While no tool is best in all respects, a virtual surrounding impression combines the power of location recording with respect for privacy.
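
To make the idea concrete, here is a minimal sketch of how such a fingerprint could work, assuming exact-match hashing of the visible Wi-Fi identifiers together with device information and a study-specific salt; the function and inputs are hypothetical, not the authors' VSI implementation, and a production method would likely tolerate partial overlap between scans.

```python
import hashlib

def surrounding_fingerprint(visible_bssids, device_id, salt):
    """Derive a one-way fingerprint of the radio surroundings.

    The raw BSSIDs are hashed and discarded; only the digest is kept,
    so the stored value cannot be mapped back to a geographic location.
    """
    material = salt + "|" + device_id + "|" + "|".join(sorted(visible_bssids))
    return hashlib.sha256(material.encode()).hexdigest()

# Two scans from the same device in the same place yield the same print.
scan_a = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"]
scan_b = ["aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:01"]  # order does not matter
scan_c = ["ff:ff:cc:dd:ee:09"]                       # a different location

fp_a = surrounding_fingerprint(scan_a, device_id="tablet-17", salt="study-2023")
fp_b = surrounding_fingerprint(scan_b, device_id="tablet-17", salt="study-2023")
fp_c = surrounding_fingerprint(scan_c, device_id="tablet-17", salt="study-2023")

print(fp_a == fp_b)  # True: surroundings unchanged
print(fp_a == fp_c)  # False: the location changed
```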


Identification of Partial Interviewer Falsification in Panel Surveys

Mrs Silvia Schwanhäuser (Institute for Employment Research) - Presenting Author
Mr Jonas Beste (Institute for Employment Research)
Mr Lukas Olbrich (Institute for Employment Research)
Mr Joe Sakshaug (Institute for Employment Research)

Interviewer-administered surveys are, in many respects, seen as the gold-standard form of data collection. Interviewers play a vital role in achieving high data quality by contacting, identifying, and recruiting target respondents, answering their queries, and administering standardized interviews. However, some interviewers may be enticed to intentionally deviate from the prescribed interviewing guidelines and fabricate parts of interviews or entire interviews. Such fabrication can lead to severe bias, especially in multivariate analyses. Hence, several studies have proposed data-driven methods for identifying complete falsifications. At the same time, the current literature mostly neglects two important aspects: (1) How can researchers effectively detect different forms of falsification in panel survey data? and (2) How can survey researchers detect partial falsifications?
The common notion in panel surveys is that falsifications are easy to detect, since inconsistent or implausible answers between waves can be flagged as suspicious. Nonetheless, information on the concrete implementation of such between-wave checks, as well as evaluations of their effectiveness for detecting falsifications, is missing from the literature. Furthermore, the previous literature lacks methods targeted at partial falsifications in both longitudinal and cross-sectional data.
In the present case study, we aim to close these gaps by examining whether partial falsifications can be effectively identified in the German panel study "Labour Market and Social Security" (PASS), which includes verified cases of interviewer misbehaviour and partial falsification. First, we assess whether established statistical detection methods and falsification indicators also succeed in identifying partial falsifications. Second, we test the common notion that falsifiers produce inconsistencies between different waves of data collection. Results indicate that various data-driven methods aid in identifying partial falsifications in longitudinal and cross-sectional data. Altogether, the results of this study inform how survey researchers can improve their quality-control procedures for panel surveys.
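
As a hedged illustration of a between-wave consistency check of the kind evaluated here (a sketch on assumed data, not the PASS quality-control code), one can flag cases whose time-invariant attributes change between waves and aggregate the inconsistency rate per interviewer; the column names and values below are invented.

```python
import pandas as pd

# Hypothetical two-wave extract; these are not actual PASS variables.
wave1 = pd.DataFrame({
    "case_id": [1, 2, 3],
    "interviewer": ["A", "A", "B"],
    "birth_year": [1970, 1985, 1992],
})
wave2 = pd.DataFrame({
    "case_id": [1, 2, 3],
    "interviewer": ["A", "A", "B"],
    "birth_year": [1970, 1979, 1992],  # case 2 is inconsistent across waves
})

merged = wave1.merge(wave2, on="case_id", suffixes=("_w1", "_w2"))
merged["inconsistent"] = merged["birth_year_w1"] != merged["birth_year_w2"]

# Share of inconsistent cases per interviewer; high rates warrant review.
print(merged.groupby("interviewer_w2")["inconsistent"].mean())
```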


Detecting falsified interviews in a longitudinal survey

Mr Andreas Franken (German Institute for Economic Research (DIW Berlin)) - Presenting Author

During the Covid-19 pandemic, regulations and changes in social behavior increased the burden on interviewers. For example, interviewees were less willing to let the interviewer into their home because they were afraid of becoming infected. Moreover, for some time it was prohibited to let non-household members into one's apartment. This increased pressure on interviewers resulted in more divergences from the formal survey process, ranging from skipping items or questions to falsifying whole interviews.
The German Socio-Economic Panel (GSOEP) is a longitudinal survey with a repeated questionnaire at the household and individual levels, running since 1984. It is mainly based on face-to-face interviews and has routines to detect conspicuous, potentially falsified interviews. These routines are built on classic statistical indicators for detecting falsified interviews, such as analysis of the Benford distribution or of time stamps. To cover a broader range of indicators, several supervised and unsupervised machine learning algorithms were tested. For training and testing these algorithms, over 500 interviews detected and validated as falsified were used. To build the algorithms' features, different kinds of data were prepared, including paradata (such as time stamps), response-behavior data (such as the use of filter questions or the Benford distribution), and interviewer information (such as the region where the interviewer operates).
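
As an illustrative sketch of such a pipeline (assumed feature set and toy data, not the GSOEP's actual routines), the snippet below derives a Benford-based divergence feature and fits a supervised classifier on per-interviewer features labelled with validated outcomes; all names, thresholds, and values are hypothetical.

```python
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

# Expected first-digit shares under Benford's law.
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def benford_distance(values):
    """Chi-square-style distance between the observed first-digit
    distribution and Benford's law; larger values are more suspicious."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    if not digits:
        return 0.0
    counts, n = Counter(digits), len(digits)
    return sum((counts.get(d, 0) / n - p) ** 2 / p for d, p in BENFORD.items())

# Hypothetical per-interviewer features: [Benford distance, mean interview
# duration in minutes, filter-question skip rate]; labels come from the
# validated falsification cases mentioned above.
X = [[0.02, 42.0, 0.10], [0.31, 18.5, 0.45], [0.05, 39.0, 0.12]]
y = [0, 1, 0]  # 1 = validated falsifier

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.28, 20.0, 0.40]]))  # score a new interviewer profile
```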
Preliminary results show that falsified interviews can be predicted with machine learning. Machine learning approaches can thus facilitate both the validation and identification of falsified interviews in future survey research. Various issues remain for discussion, among them the small number of validated falsifications and falsifying interviewers, the difficulty of distinguishing a low-quality interview from genuine fraud, and the changing amount of data over the years.