Thursday 18th July 2013, 16:00 - 17:30, Room: No. 20

The use of respondent incentives in face-to-face surveys: Effects on response rates, survey error and survey costs

Convenor: Mr Klaus Pforr (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1: Mr Ulrich Krieger (MEA, Max Planck Institute for Social Law and Social Policy)
Coordinator 2: Mr Michael Blohm (GESIS - Leibniz Institute for the Social Sciences)

Session Details

Decreasing response rates have become a major concern for face-to-face surveys in modern societies over the last decades. Respondent incentives are one possible and often used measure to counter this downward trend. Incentives are used in a large variety of forms and values, and across survey modes.

While there is evidence for a positive effect of incentives on response rates, the effects on survey error are still debated. Respondent incentives may increase sample selectivity by attracting a specific subset of respondents disproportionately to the survey. Incentives may also systematically change respondents' answers, thus producing measurement error, by changing survey participants' perception of the study or their motivation for participation.
As incentives increase direct survey costs, research on their overall cost effectiveness is needed. Incentives may ease the contact process or increase data quality, thus saving the survey enterprise fieldwork or data-editing costs.

Contributions sought for this session will address one or more of the following research questions:
* Effects of respondent incentives on survey outcomes: How are contact, cooperation, and response rates influenced by incentives?
* Effects of respondent incentives on sample composition and nonresponse bias. Do incentives differentially affect the response propensity of various subgroups of the population?
* Effects of respondent incentives on measurement error. Do respondent incentives change the response behavior during the interview?
* Are respondent incentives cost effective? Can savings in terms of fieldwork effort outweigh the direct expenses for incentives?
We want to focus the session on contributions from large-scale face-to-face surveys. We prefer results from experimental studies; however, all studies addressing the research questions are welcome.


Paper Details

1. Respondent incentives as one measure to increase response rates: The example of PIAAC in Germany

Mrs Silke Martin (GESIS - Leibniz Institute for the Social Sciences, Mannheim (Germany))
Mrs Susanne Helmschrott (GESIS - Leibniz Institute for the Social Sciences, Mannheim (Germany))

The Programme for the International Assessment of Adult Competencies (PIAAC) is a large cross-national survey organized by the OECD. Each participating country has to adhere to best-practice standards and procedures to ensure the production of high-quality and internationally comparable data. One central standard for data inclusion in the international reports is a minimum overall response rate of 50%, provided that there are no serious levels of bias in the country data. As in various other countries, social surveys in Germany have been facing decreasing response rates over time. One measure to improve the response rate in PIAAC Germany was the provision of a monetary respondent incentive. In this contribution we present results from the incentive experiment conducted in the PIAAC field test, focusing on the effect on response rates and the subsequent implications for the PIAAC main study.


2. Optimizing incentives for face-to-face surveys in Switzerland

Mrs Michele Ernst Staehli (FORS, Université de Lausanne)
Mr Dominique Joye (Université de Lausanne)
Mr Dorian Kessler (FORS, Université de Lausanne)

This presentation gives an overview of ten years of experiments with respondent incentives for long face-to-face surveys in Switzerland, such as the European Social Survey, and considers whether and how incentives could be used to minimize unit nonresponse error.
For academically driven surveys, face-to-face interviewing has been largely abandoned in Switzerland since the 1980s in favour of CATI interviews, so experience in this field had to be rebuilt. As response rates are generally rather low in this country, nation-specific 'best practices' had to be developed for incentives.
In our experiments, we varied in particular the value of the incentive, the length of the survey, and the form of the incentive (cash, cheques, vouchers, donations; unconditional and promised).
Our results support previous findings that incentives are cost-effective for the survey organisation and that prepaid incentives are preferable to conditional ones. They also support Dillman's social exchange theory rather than the idea of economic exchange.
However, going beyond the mere response rate, one should also consider the effect of incentives on nonresponse error. Following leverage-salience theory, incentives may not have the same effect for all groups. With a register-based sample since 2010, we can also analyse our experiments thoroughly with respect to sample composition. The results will show whether incentives, in addition to their response-rate-enhancing effect, can be used to improve the representativeness of the final sample.


3. Effects of incentives on response at subsequent panel waves - statistical matching evidence from the NEPS adult survey

Dr Corinna Kleinert (Institute for Employment Research (IAB))
Miss Barbara Erdel (Institute for Employment Research (IAB))

Longitudinal surveys impose a high burden on respondents and are strongly dependent on their continuous cooperation. It is thus not surprising that most panel studies currently use incentives in the hope of increasing response and strengthening long-term participation. Most of these expectations are based on evidence from cross-sectional studies, whereas there is only limited knowledge of the complex effects incentives might have in panel designs. This is also true for the question of how changes in the incentive amount affect response in subsequent panel waves. High incentives are often paid in a later phase of fieldwork to secure the cooperation of reluctant sample members. So far, there is only limited empirical evidence on the effect of such an end-game strategy (Lengacher et al. 1995). A similar strategy was adopted in the adult survey of the National Educational Panel Study (NEPS) in Germany: in the first panel wave, the conditional incentive was increased from €10 to €50 during fieldwork. In the subsequent wave, all respondents received €25. Since this strategy was applied to the whole sample, its effect on attrition in the second wave is examined empirically using statistical matching. For this purpose, we rely on the rich set of data on respondents, interview experiences, and paradata from the first NEPS wave to form a comparison group of statistical twins with characteristics similar to those of the €50 treatment group. Preliminary results suggest that the high incentive had no effect on later response.
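The matching step described in this abstract can be illustrated with a minimal nearest-neighbour matching sketch: for each treated unit, find the untreated unit with the closest standardized covariate profile. The covariates and data below are purely illustrative and are not drawn from the NEPS study; the actual analysis may use a different matching estimator.

```python
def standardize(rows):
    """Z-score each covariate column so distances are comparable across scales."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    sds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5 or 1.0
           for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(r, means, sds)] for r in rows]

def nearest_neighbour_match(treated, controls):
    """Return, for each treated unit, the index of its closest control ("statistical twin")."""
    z = standardize(list(treated) + list(controls))
    zt, zc = z[:len(treated)], z[len(treated):]
    matches = []
    for t in zt:
        dists = [sum((a - b) ** 2 for a, b in zip(t, c)) for c in zc]
        matches.append(dists.index(min(dists)))
    return matches

# Illustrative covariates per person: (age, years of education)
treated  = [(45, 12), (30, 16)]   # e.g. received the high incentive
controls = [(44, 12), (29, 17), (60, 9)]
print(nearest_neighbour_match(treated, controls))  # → [0, 1]
```

Outcomes (e.g. second-wave participation) of the treated group and their matched twins can then be compared directly to estimate the incentive effect.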


4. Influences of Incentives on Response Rates and Sample Selection - Evidence from the SOEP

Professor Juergen Schupp (SOEP/DIW-Berlin)
Professor Martin Kroh (DIW BERLIN)
Dr Denise Sassenroth (DIW Berlin)

The Socio-Economic Panel (SOEP) conducted two experiments in recent years to test the impact of in-kind and monetary incentives, with the aim of showing their impact both on the first response to a survey and in already established longitudinal samples.
This paper addresses several questions. First, using a classic experiment that varies the incentive amount, the focus is on showing the direct impact of incentives on the propensity to respond to the survey and how it differs by treatment. Additional variables such as neighborhood characteristics and interviewer fixed effects are used to elicit potential heterogeneous effects of incentives on first-wave response.
Second, the same sample is used to determine the effects of an incentive treatment in the long run, i.e. whether the response propensity in the second wave, in which no additional incentive was provided, is still influenced.
Third, the SOEP's longitudinal nature was used in an additional experiment in which incentives were tested along three household dimensions: single-respondent vs. multi-respondent households; participating households with non-respondents vs. those without; and an interview-mode variation (face-to-face vs. mail). An incentive was tested on these groups to see whether the different types of households require differential approaches when using incentives. This experiment was conducted in 2010 and 2011, so that longitudinal data can be used to further determine the impact of the incentives. Heterogeneous effects of the incentives in a long-term setting are tested.


5. Incentive experiments in the recruitment of a probability-based online panel - Experiences from the German Internet Panel

Professor Annelies Blom (Mannheim University)
Mr Ulrich Krieger (Mannheim University)


The German Internet Panel (GIP) is based on a true probability sample of individuals living within households and is therefore the first of its kind in Germany. In 2012, recruitment for the GIP was conducted offline through face-to-face interviews. Subsequently, all household members were invited to participate in the online panel. After online registration, interviews take place bimonthly on topics of political and economic behavior and attitudes.

Recruitment into the GIP consisted of various stages: the face-to-face household interview, mailed invitations to the online survey, reminder letters, a phone follow-up, and final mailed reminders. During the face-to-face phase we conducted an experiment with €5 unconditional vs. €10 conditional household incentives. In addition, an experiment with €5 unconditional personal incentives was conducted during the first mailed reminder.

In our research we examine the effects of the experimental variation of incentives on nonresponse and nonresponse bias in recruitment, using georeferenced marketing data. Did incentives help recruit a representative online panel, or did they induce a bias in the panel population?
Moreover, the effect of incentives on active participation in the online study will be examined. Incentivised respondents may also transition more quickly and reliably to the web interview mode, thus requiring fewer written or telephone reminders.