



Thursday 20th July, 14:00 - 15:30 Room: N AUD5


Deviations and Fraud in Surveys - the Impact of Motivation and Incentives 2

Chair: Professor Katrin Auspurg (LMU Munich)
Coordinator 1: Professor Thomas Hinz (University of Konstanz)
Coordinator 2: Dr Natalja Menold (GESIS)
Coordinator 3: Professor Peter Winker (University of Giessen)

Session Details

The credibility of social science has repeatedly been jeopardized by recent and spectacular cases of deviant behavior in conducting surveys or of fraud in presenting survey-based research results. Several times researchers published path-breaking results that turned out to be ‘too good to be true.’ Because the incentive system in science commonly rewards originality more highly than accuracy, the detected cases of fabricating data or trimming results are most probably only the tip of the iceberg.

What makes the situation in survey research even more complex is the fact that several actors are involved who have manifold incentives to manipulate data. These include the researchers, survey institutes, survey supervisors, interviewers and respondents. Contributions to the session will discuss the motivation, prevalence and implications of misbehavior by actors in survey research. Of interest are theoretical approaches and empirical studies on the motivation, detection and prevention of data manipulation. Strategies to detect fraud deserve specific attention, but we also welcome empirical work on causal mechanisms: Which conditions most likely trigger fraud? Which interventions could accordingly work?

Some examples along the survey process highlight possible topics for the session:

(1) Respondents often share the interviewers’ interest in saving time by taking inaccurate shortcuts through the questionnaire. Additionally, they are prone to provide false answers, for instance, if questions are sensitive. Both kinds of behavior yield inaccurate measurements.

(2) Interviewers often have considerable discretion over many decisions in the process of conducting a survey (e.g., when selecting households in a random-walk sample, when shortening the interview by steering respondents to filter options in the questionnaire, or when making up interviews (partly) from scratch). The motivation for deviant behavior can be influenced by factors such as task difficulty, interviewers’ ability and experience, but also by the quality of questionnaires and instructions and other administrative characteristics of the survey.

(3) Survey institutes often operate commercially under high cost and time pressure. In order to fulfill their contractual obligations to their clients, they may have incentives, for instance, to change complex screening procedures without documenting it, to manipulate non-response statistics, or even to produce (near) duplicates to satisfy quotas.

(4) Finally, researchers themselves can engage in questionable practices as well when they select cases and statistical models purely in order to obtain the most sensational results.

Paper Details

1. Interviewers’ motivation, influencing factors and impact on data accuracy – results from an in-field experiment
Dr Natalja Menold (GESIS)
Professor Peter Winker (University of Giessen)
Mrs Uta Landrock (TU Kaiserslautern)

A psychological motivation theory, Motivation Intensity Theory (MIT), was used to operationalize interviewers’ motivation and to analyze its effect on interviewers’ accuracy when documenting the interview process (paradata). According to MIT, the effort expended on a task, which determines accuracy and persistence, can be predicted by (1) the subjective difficulty of the task at hand, (2) the potential motivation (“upper bound of what people would be willing to do to succeed”) in a given task, and (3) the self-assessed ability to cope with the task demands. In our in-field experiment, we collected data on these factors before and after each contact attempt by an interviewer. As factors that can affect interviewers’ motivation, we considered interviewers’ payment and respondents’ cooperation. These factors were varied in the experiment. Two payment schemes were used: payment per interview and payment per hour. To obtain a controlled experimental setting for respondents’ cooperation, the potential respondents were instructed to cooperate, to refuse, or to break off the interview. The results show that payment per hour increases interviewers’ motivation and that there are interaction effects between payment scheme and respondents’ cooperation on interviewers’ motivation. Furthermore, interviewers’ motivation is positively related to their accuracy.
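A minimal sketch of how such an interaction analysis might look (not the authors' code; the data file, variable names and coding are assumptions for illustration):

import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per contact attempt, with the experimentally varied
# factors and a motivation score measured around the contact.
#   payment     : "per_interview" or "per_hour"
#   cooperation : "cooperate", "refuse" or "break_off"
#   motivation  : interviewer motivation score
df = pd.read_csv("contact_attempts.csv")  # hypothetical file name

# Main effects of payment scheme and respondent cooperation plus their interaction.
model = smf.ols("motivation ~ C(payment) * C(cooperation)", data=df).fit()
print(model.summary())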


2. “Curbstoning”: case study of an elaborate interviewer falsification scheme and new procedures to prevent interviewer fabrication
Dr Frederic Malter (Max-Planck-Institute for Social Law and Social Policy (MPISOC))

A particularly vexing problem for many face-to-face surveys is interviewer falsification, either by interviewers falsifying “just some” item responses or skipping items illicitly, or by so-called “curbstoning”, the most drastic form of interviewer falsification, in which entire interviews are fabricated and the target person is never even contacted. In this presentation I will first describe a very elaborate curbstoning scheme we discovered during the sixth wave of the Survey of Health, Ageing and Retirement in Europe (SHARE). The scheme was remarkable in a number of ways: the interviewers involved had initially been selected to work for SHARE because of their commendable performance during the SHARE pretests and in other studies. Next, the scheme was cleverly concocted to make detection through the back-check procedures in place at the time very difficult. It involved a fairly small number of interviewers but ended up affecting a fairly large part of the net sample, so this case study suggests a high level of “criminal sophistication” by the interviewers involved. Finally, this type of “Ponzi scheme” affected gross sample units in a geographical cluster, making it a pattern of unit nonresponse that is not missing at random and necessitating specific procedures to avoid sampling bias. Following the insights from this case study, my presentation will then showcase how we revised procedures and policies in the data production process of SHARE to prevent curbstoning in future waves. This involved changes in fieldwork monitoring and management, statistical control procedures (to be presented by my colleagues Bergmann & Schuller in the same session), and new protocols regarding collaboration between the various actors (SHARE Central, survey agencies and university teams). I will make recommendations as to how other large-scale face-to-face surveys could benefit from our experience.


3. Identifying fake interviews in a cross-national panel study (SHARE).
Dr Karin Schuller (Max-Planck-Institute for Social Law and Social Policy)
Dr Michael Bergmann (Max-Planck-Institute for Social Law and Social Policy)

Interviewer fabrication (“fake interviews”) is a problem in all interviewer-conducted surveys and has repeatedly come up in the Survey of Health, Ageing and Retirement in Europe (SHARE) as well. While there are many variations of and different reasons for interviewers deviating from properly administering the survey, in this project we deal only with the most extreme deviation, i.e. interviewers’ fabrication of entire interviews.
The main aim of our project is to implement a technical procedure to identify fakes in computer-administered survey data. In contrast to previous work, which often used only a few variables to identify fake interviews, we implement a more complex approach that draws on variables from different data sources to build a comprehensive mechanism for identifying fake interviews. We use several indicators from CAPI data (size of social networks, avoiding follow-up questions, number of proxy interviews, rounding in physical tests, extreme answering, straight-lining, number of “other” answers, number of missing values) as well as paradata (interview length, number of interviews per day, number of contact attempts, cooperation rates). We combine these indicators using a multivariate cluster analysis to distinguish two groups of interviewers: a falsifier group and an honest interviewer group.
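As a rough illustration of the kind of procedure described above (not the SHARE script; file and column names are invented, and k-means is only one possible clustering choice), interviewer-level indicators could be standardized and split into two clusters, with the more suspicious cluster flagged for back-checks:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Assumed layout: one row per interviewer; columns are indicators such as
# mean social network size, share of "other" answers, mean interview length,
# interviews per day, cooperation rate, ...
indicators = pd.read_csv("interviewer_indicators.csv", index_col="interviewer_id")

X = StandardScaler().fit_transform(indicators)              # put all indicators on one scale
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

indicators["cluster"] = clusters
# Inspect cluster profiles; the cluster with the more suspicious profile is the
# candidate "falsifier" group to be sent for back-checks, not proof of faking.
print(indicators.groupby("cluster").mean())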
During the sixth wave of SHARE we discovered a very elaborate team of falsifiers who faked a fairly large part of the net sample (see details in the abstract submitted by our colleague Frederic Malter in the same session). We use these known fakes as a benchmark to check whether our script is able to properly identify fake interviews. Thus, in comparison to most of the existing work, our study has the advantage of being based on a large dataset that includes information on actual fakes.
First results show that we are able to identify most of the faked interviews while at the same time keeping the number of “false alarms” small. Although in most cases we cannot be perfectly sure whether an interview has been faked or not, our results can be used to provide survey agencies with a much more informed sample for back-checking suspicious interviewers and interviews. In this way, we hope to substantially improve the quality of our survey data.
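A simple sketch of the benchmark check described above (hypothetical; column and file names are assumptions): compare the flags against the known Wave 6 fakes and report the share of fakes detected versus the share of honest interviews flagged by mistake.

import pandas as pd

# Assumed columns: 'flagged' (1 if the procedure marks the interview as
# suspicious) and 'known_fake' (1 if it belongs to the confirmed Wave 6 fakes).
results = pd.read_csv("classification_results.csv")

detected     = ((results.flagged == 1) & (results.known_fake == 1)).sum()
false_alarms = ((results.flagged == 1) & (results.known_fake == 0)).sum()

hit_rate         = detected / (results.known_fake == 1).sum()
false_alarm_rate = false_alarms / (results.known_fake == 0).sum()
print(f"hit rate: {hit_rate:.2f}, false-alarm rate: {false_alarm_rate:.2f}")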