
ESRA 2023 Glance Program


All time references are in CEST

Innovations in the conceptualization, measurement, and reduction of respondent burden 2

Session Organisers: Mrs Deirdre Giesen (Statistics Netherlands)
Dr Robin Kaplan (U.S. Bureau of Labor Statistics)
Time: Thursday 20 July, 16:00 - 17:30
Room: U6-20

In an era of declining response rates, increasing use of multiple survey modes, and difficulties retaining respondents across multiple survey waves, the question of how to better understand, measure, and reduce respondent burden is crucial. In official statistics, respondent burden is often conceptualized in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions asked. Bradburn (1978) posited that, in addition to these objective measures, burden can be thought of as a multidimensional concept that includes respondents’ subjective perceptions of how effortful the survey is, how sensitive or invasive the questions are, and how long the survey is. The level of burden can also vary by the mode of data collection, survey characteristics, demographic and household characteristics of respondents, and the frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential to increase measurement error, attrition in panel surveys, survey nonresponse, and nonresponse bias, and thereby to degrade data quality. Building on the recent Journal of Official Statistics Special Issue on Respondent Burden, we invite papers on new and innovative methods of measuring both objective burden and respondents’ subjective perceptions of burden, as well as on approaches to assessing and mitigating the impact of respondent burden on survey response and nonresponse bias. We welcome submissions that explore the following topics:

• The relationship between objective and subjective measures of respondent burden
• Strategies to assess or mitigate the impact of respondent burden
• Quantitative or qualitative research on respondents’ subjective perceptions of survey burden
• The relationship between respondent burden, response propensity, nonresponse bias, response rates, item nonresponse, and other data quality measures
• Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
• Differences in respondent burden across different survey modes

Keywords: Respondent burden, data quality, item nonresponse

Papers

Response Burden and Dropout in a Probability-Based Online Panel Study – A Comparison between an App and Browser-Based Design

Dr Caroline Roberts (University of Lausanne) - Presenting Author
Dr Jessica Herzing (University of Bern)
Mr Marc Asensio Manjon (University of Lausanne)
Mr Philip Abbet (Idiap Research Institute)
Professor Daniel Gatica-Perez (Idiap Research Institute and EPFL)

Survey respondents can complete web surveys using different Internet-enabled devices (PCs versus mobile phones and tablets) and using different software (web browser versus a mobile software application, “app”). Previous research has found that completing questionnaires via a browser on mobile devices can lead to higher breakoff rates and reduced measurement quality compared to using PCs, especially where questionnaires have not been adapted for mobile administration. A key explanation is that using a mobile browser is more burdensome and less enjoyable for respondents. There are reasons to assume apps should perform better than browsers, but so far, there have been few attempts to assess this empirically. In this study, we investigate variation in experienced burden across device and software in wave 1 of a three-wave panel study, comparing an app with a browser-based survey, in which sample members were encouraged to use a mobile device. We also assess device/software effects on participation at wave 2. We find that, compared to mobile browser respondents, app respondents were less likely to drop out of the study after the first wave, and that the effect of the device used was mediated by subjective burden experienced during wave 1.
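The mediation claim in the final sentence can be pictured with a standard three-regression setup. The sketch below is a minimal illustration in Python with simulated data and hypothetical variable names (`app`, `burden`, `dropout`); it is a Baron-Kenny style decomposition, not the authors' actual models.

```python
# Minimal sketch of a Baron-Kenny style mediation check -- simulated data,
# hypothetical variable names; NOT the authors' models or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
app = rng.integers(0, 2, n)                        # 1 = app, 0 = mobile browser
burden = 0.5 - 0.3 * app + rng.normal(0, 1, n)     # subjective burden at wave 1
p_drop = 1 / (1 + np.exp(-(-1.0 + 0.8 * burden)))  # dropout risk driven by burden
dropout = rng.binomial(1, p_drop)                  # 1 = dropped out before wave 2

# Path c: total effect of device on dropout.
total = sm.Logit(dropout, sm.add_constant(app.astype(float))).fit(disp=0)
# Path a: effect of device on the mediator (subjective burden).
med = sm.OLS(burden, sm.add_constant(app.astype(float))).fit()
# Paths c' and b: direct effect of device, controlling for burden.
direct = sm.Logit(dropout, sm.add_constant(
    np.column_stack([app, burden]).astype(float))).fit(disp=0)

# Mediation is suggested if the device coefficient shrinks from `total`
# to `direct` while burden predicts dropout in `direct`.
print(total.params, med.params, direct.params)
```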


Relationship Between Past Survey Burden and Response Probability to a New Survey in a Probability-Based Online Panel

Dr Haomiao Jin (University of Surrey)
Professor Arie Kapteyn (University of Southern California) - Presenting Author

An online panel is a sample of persons who have agreed to complete surveys via the Internet. By tailoring key respondent burden variables like questionnaire length and survey frequency, panel administrators can control the burden of taking surveys among panel participants. Based on common assumptions about the impacts of respondent burden, one may surmise that the experience of long questionnaires and frequent surveys may overburden participants in panel studies and therefore decrease their propensity to complete a future survey. In this study, we conducted an idiographic analysis to examine the effect of survey burden, measured by the length of the most recent questionnaire and the number of survey invitations (survey frequency) in the one-year period preceding a new survey, on the response probability to a new survey in a probability-based Internet panel. The individual response process was modeled by a latent Markov chain with questionnaire length and survey frequency as explanatory variables. The individual estimates were obtained using a Monte Carlo-based method and then pooled to derive estimates of the overall relationships and to identify specific subgroups whose responses were more likely to be impacted by questionnaire length or survey frequency. The results show an overall positive relationship between questionnaire length and response probability, and no significant relationship between survey frequency and response probability. Further analysis showed that longer questionnaires were more likely to be associated with decreased response rates among racial/ethnic minorities and introverted participants. Frequent surveys were more likely to be associated with decreased response rates among participants with large households. The findings suggest that longer questionnaires and frequent surveys may not lead to a decreased response propensity to a new survey for the majority of participants in a large probability-based panel. The study advocates targeted interventions for the small subgroups of participants whose responses may be negatively impacted by longer questionnaires and frequent surveys.
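As a rough picture of the idiographic (person-by-person, then pooled) approach described above, the sketch below fits a per-panelist Markov-logistic model of response given the previous wave's outcome, questionnaire length, and survey frequency, using a crude Monte Carlo search, and then averages the person-level coefficients. All data and variable names (`resp`, `length`, `freq`) are simulated and hypothetical; the paper's latent Markov estimator is not reproduced here.

```python
# Simplified sketch of the idiographic idea -- per-person Markov-logistic
# fits pooled across a panel. Simulated data, hypothetical names; NOT the
# paper's latent Markov / Monte Carlo estimator.
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_person(resp, length, freq, n_draws=5000):
    """Crude Monte Carlo (random-search) maximum likelihood for one panelist:
    response at wave t depends on questionnaire length, survey frequency,
    and the previous wave's response (first-order Markov dependence)."""
    X = np.column_stack([np.ones(len(resp) - 1), length[1:], freq[1:], resp[:-1]])
    y = resp[1:]
    best_b, best_ll = None, -np.inf
    for _ in range(n_draws):
        b = rng.normal(0, 2, size=X.shape[1])
        p = np.clip(sigmoid(X @ b), 1e-9, 1 - 1e-9)
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        if ll > best_ll:
            best_b, best_ll = b, ll
    return best_b

# Simulate a small panel, fit each member, then pool the estimates.
estimates = []
for _ in range(25):
    T = 24
    length = rng.normal(size=T)               # standardized questionnaire length
    freq = rng.normal(size=T)                 # standardized invitation frequency
    resp = (rng.random(T) < 0.7).astype(int)  # 1 = completed the survey
    estimates.append(fit_person(resp, length, freq))

pooled = np.mean(estimates, axis=0)  # intercept / length / frequency / Markov terms
print(pooled)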


Modeling the Relationship between Proxy Measures of Respondent Burden and Survey Response Rates in a Household Survey

Dr Morgan Earp (US National Center for Health Statistics) - Presenting Author
Dr Robin Kaplan (US Bureau of Labor Statistics)
Dr Daniell Toth (US Bureau of Labor Statistics)

Respondent burden has important implications for survey outcomes, including response rates and attrition in panel surveys. Despite this, respondent burden remains an understudied topic in the field of survey methodology, with few researchers systematically measuring objective and subjective burden factors in surveys used to produce official statistics. This research was designed to assess the impact of proxy measures of respondent burden, drawing on both objective (survey length and frequency) and subjective (effort, saliency, and sensitivity) burden measures, on response rates over time in the Current Population Survey (CPS). Exploratory factor analysis confirmed that the burden proxy measures were interrelated and formed five distinct factors. Regression tree models further indicated that both objective and subjective proxy burden factors were predictive of future CPS response rates. Additionally, respondent characteristics, including employment and marital status, interacted with these burden factors to further help predict response rates over time. We discuss the implications of these findings, including the importance of measuring both objective and subjective burden factors in production surveys. Our findings support a growing body of research suggesting that subjective burden and individual respondent characteristics should be incorporated into conceptual definitions of respondent burden, and have implications for adaptive design.
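The two-step analysis the abstract describes (factor extraction over interrelated burden proxies, then tree-based prediction of response rates) can be sketched as follows. This is a minimal illustration with simulated data and hypothetical variables, not the study's CPS analysis; it uses scikit-learn's FactorAnalysis and DecisionTreeRegressor as stand-ins for the exploratory factor analysis and regression tree models named above.

```python
# Minimal sketch of the two-step analysis -- simulated data, hypothetical
# variables; NOT the CPS data or the study's actual models.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n = 1000
# Eight interrelated burden proxies standing in for objective measures
# (e.g., survey length, contact frequency) and subjective ones
# (e.g., effort, saliency, sensitivity).
proxies = rng.normal(size=(n, 8))
employed = rng.integers(0, 2, n)   # respondent characteristics that may
married = rng.integers(0, 2, n)    # interact with the burden factors
response_rate = (0.8 - 0.05 * proxies[:, 0]
                 + 0.03 * employed + rng.normal(0, 0.05, n))

# Step 1: extract latent burden factors (the abstract reports five).
factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(proxies)

# Step 2: regression tree predicting response rates from the factors
# plus respondent characteristics; tree splits capture interactions.
X = np.column_stack([factors, employed, married])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, response_rate)
print(tree.feature_importances_)
```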


Reducing the respondent burden of income questions in a longitudinal study

Dr Tugba Adali (UCL Centre for Longitudinal Studies) - Presenting Author
Professor Emla Fitzsimons (UCL Centre for Longitudinal Studies)
Dr Nicolas Libuy Rios (UCL Centre for Longitudinal Studies)
Mr Matt Brown (UCL Centre for Longitudinal Studies)

Measuring income is an important feature of many social surveys but collecting an accurate measure can be challenging. The UK cohort studies all include detailed modules which cover each component of income, in addition to stand-alone measures of total take-home income. We know from respondent feedback that the income module is often perceived as burdensome.

Participants in the Millennium Cohort Study (MCS), a UK cohort of about 19,000 individuals born around 2000, will have reached early adulthood by the time of the next wave of data collection. It will be the first time we collect detailed income measures from the study members, and, in the interests of longitudinal continuity, the measures we select at this baseline adult wave will be carried forward in future waves, so the choice of measure is important.

In this paper we present the findings of a pilot study which aimed to test the properties of different measures of income. The primary aim was to lower respondent burden by reducing the number of questions in the income module without impacting the accuracy of our estimates. A sample of 1,000 21- to 30-year-olds was allocated to one of four groups defined by long versus short modules crossed with open- versus closed-ended single income questions.

We will compare the income data collected via the long and short modules; examine how closely the single-question estimates align with estimates from the long and short modules; and assess whether banded or unbanded single-item income questions perform better. We will also look at the impact of the different measures on respondent feedback about their experience of completing the survey.