
ESRA 2023 Preliminary Glance Program


All time references are in CEST

Innovations in the conceptualization, measurement, and reduction of respondent burden 3

Session Organiser: Dr Robin Kaplan (U.S. Bureau of Labor Statistics)
Time: Thursday 20 July, 16:00 - 17:30
Room: U6-20

In an era of declining response rates, increasing use of multiple survey modes, and difficulties retaining respondents across multiple survey waves, the question of how to better understand, measure, and reduce respondent burden is crucial. In official statistics, respondent burden is often conceptualized in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions asked. Bradburn (1978) posited that in addition to these objective measures, burden can be thought of as a multidimensional concept that can include respondents’ subjective perceptions of how effortful the survey is, how sensitive or invasive the questions are, and how long the survey is. The level of burden can also vary by the mode of data collection, survey characteristics, demographic and household characteristics of respondents, and the frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential to increase measurement error, attrition in panel surveys, survey nonresponse, and nonresponse bias, as well as to degrade data quality. Building on the recent Journal of Official Statistics Special Issue on Respondent Burden, we invite papers on new and innovative methods of measuring both objective burden and subjective perceptions of burden, as well as of assessing and mitigating the impact of respondent burden on survey response and nonresponse bias. We welcome submissions that explore the following topics:

• The relationship between objective and subjective measures of respondent burden
• Strategies to assess or mitigate the impact of respondent burden
• Quantitative or qualitative research on respondents’ subjective perceptions of survey burden
• The relationship between respondent burden, response propensity, nonresponse bias, response rates, item nonresponse, and other data quality measures
• Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
• Differences in respondent burden across different survey modes

Keywords: Respondent burden, data quality, item nonresponse

Response Burden and Dropout in a Probability-Based Online Panel Study – A Comparison between an App and Browser-Based Design

Dr Caroline Roberts (University of Lausanne) - Presenting Author
Dr Jessica Herzing (University of Bern)
Mr Marc Asensio Manjon (University of Lausanne)
Mr Philip Abbet (Idiap Research Institute)
Professor Daniel Gatica-Perez (Idiap Research Institute and EPFL)

Survey respondents can complete web surveys using different Internet-enabled devices (PCs versus mobile phones and tablets) and different software (a web browser versus a mobile software application, or “app”). Previous research has found that completing questionnaires via a browser on mobile devices can lead to higher breakoff rates and reduced measurement quality compared to using PCs, especially where questionnaires have not been adapted for mobile administration. A key explanation is that using a mobile browser is more burdensome and less enjoyable for respondents. There are reasons to assume apps should perform better than browsers, but so far there have been few attempts to assess this empirically. In this study, we investigate variation in experienced burden across device and software in wave 1 of a three-wave panel study, comparing an app with a browser-based survey in which sample members were encouraged to use a mobile device. We also assess device/software effects on participation at wave 2. We find that, compared to mobile browser respondents, app respondents were less likely to drop out of the study after the first wave, and that the effect of the device used was mediated by the subjective burden experienced during wave 1.
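
Editor's note: the mediation claim in the final sentence can be illustrated with a minimal sketch. The snippet below is not the authors' analysis; it assumes hypothetical variable names (app, burden_w1, dropout_w2), synthetic data, and arbitrary effect sizes, and it uses statsmodels' Mediation class to estimate how much of the device effect on wave-2 dropout runs through wave-1 subjective burden.

# Minimal mediation sketch (synthetic data; variable names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(42)
n = 2000
app = rng.integers(0, 2, n)                        # 1 = app, 0 = mobile browser
burden_w1 = 3.0 - 0.8 * app + rng.normal(0, 1, n)  # app lowers subjective burden
logit = -1.0 + 0.6 * burden_w1 - 0.2 * app
dropout_w2 = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"app": app, "burden_w1": burden_w1, "dropout_w2": dropout_w2})

# Mediator model: burden as a function of device; outcome model: dropout as a
# function of device and burden. Mediation() combines them to estimate the
# average direct and indirect (via burden) effects of using the app.
mediator_model = sm.OLS.from_formula("burden_w1 ~ app", df)
outcome_model = sm.GLM.from_formula("dropout_w2 ~ app + burden_w1", df,
                                    family=sm.families.Binomial())
med = Mediation(outcome_model, mediator_model, exposure="app",
                mediator="burden_w1").fit(n_rep=200)
print(med.summary())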


Relationship Between Past Survey Burden and Response Probability to a New Survey in a Probability-Based Online Panel

Dr Haomiao Jin (University of Surrey) - Presenting Author
Professor Arie Kapteyn (University of Southern California)

An online panel is a sample of persons who have agreed to complete surveys via the Internet. By tailoring key respondent burden variables like questionnaire length and survey frequency, panel administrators can control the burden of taking surveys among panel participants. Based on common assumptions about the impacts of respondent burden, one may surmise that experiences of long questionnaires and frequent surveys overburden participants in panel studies and therefore decrease their propensity to complete a future survey. In this study, we conducted an idiographic analysis to examine the effect of survey burden, measured by the length of the most recent questionnaire or the number of survey invitations (survey frequency) in the one-year period preceding a new survey, on the probability of responding to that survey in a probability-based Internet panel. The individual response process was modeled by a latent Markov chain with questionnaire length and survey frequency as explanatory variables. The individual estimates were obtained using a Monte Carlo-based method and then pooled to derive estimates of the overall relationships and to identify specific subgroups whose responses were more likely to be affected by questionnaire length or survey frequency. The results show an overall positive relationship between questionnaire length and response probability, and no significant relationship between survey frequency and response probability. Further analysis showed that longer questionnaires were more likely to be associated with decreased response rates among racial/ethnic minorities and introverted participants, and frequent surveys were more likely to be associated with decreased response rates among participants with large households. The findings suggest that experiences of longer questionnaires and frequent surveys may not decrease response propensity to a new survey for the majority of participants in a large probability-based panel. The study advocates targeted interventions for the small subgroups of participants whose response propensity may be negatively affected by longer questionnaires and frequent surveys.
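
Editor's note: as a rough illustration of the kind of model described above, the sketch below fits a much-simplified, observed-state (not latent) Markov-style transition model: the probability of responding to the next survey is regressed on the previous response state, the length of the most recent questionnaire, and the number of invitations in the preceding year. All variable names and data are hypothetical assumptions; the paper's actual approach uses a latent Markov chain estimated with Monte Carlo methods and pooled across individuals.

# Simplified observed-state Markov sketch (synthetic data; hypothetical names).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 3000
prev_response = rng.integers(0, 2, n)   # responded to the previous survey?
quest_length = rng.uniform(5, 60, n)    # minutes, most recent questionnaire
invitations = rng.poisson(12, n)        # invitations in the past year

# Synthetic response process: strong state dependence, weak burden effects.
logit = -0.5 + 1.5 * prev_response + 0.005 * quest_length - 0.02 * invitations
respond_next = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"respond_next": respond_next, "prev_response": prev_response,
                   "quest_length": quest_length, "invitations": invitations})

# Transition model: P(respond to new survey | previous state, burden covariates).
model = sm.Logit.from_formula(
    "respond_next ~ prev_response + quest_length + invitations", df).fit(disp=0)
print(model.summary())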


Should I Use Factorial or Conjoint Experiments to Evaluate Policy Designs? A Comparison of Estimates, Respondent Burden, and Understanding

Dr Keith Smith (ETH Zurich)
Mr Florian Lichtin (ETH Zurich) - Presenting Author
Professor Thomas Bernauer (ETH Zurich)
Professor Kay W. Axhausen (ETH Zurich)

Environmental, transport, and social public policies are often quite complex, incorporating multiple instruments and regulations, yet public support varies substantially by instrument. Accordingly, recent empirical research has focused on how policy designs (policy packages that include diverse instruments) shape policy support (e.g. carbon taxation, mobility pricing), commonly adopting survey-embedded experimental designs such as conjoint and factorial experiments. Yet little is known about potential differences between adopting conjoint and factorial experimental approaches to evaluate policy designs. Here, we use a novel methodological survey-embedded experiment to compare how policy preferences differ between factorial and conjoint designs, and we explore perceptions of comprehension and burden associated with these design choices. We find similar patterns of policy instrument support across the three experimental design conditions, while noting that conjoint experimental designs were perceived to be more burdensome and somewhat more difficult to understand. We therefore formulate a set of recommendations: factorial experimental designs are suggested for policy research focusing on specific instruments (levels), while conjoint designs are suggested for research aimed at identifying policy packages (combinations of levels) and sub-group differences, leveraging the increased statistical power of such experiments.
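
Editor's note: for readers unfamiliar with how such experiments are typically analysed, the sketch below estimates average marginal component effects (AMCE-style coefficients) from a synthetic conjoint-style dataset by regressing a support rating on dummy-coded attribute levels, with standard errors clustered by respondent. The attributes, levels, and data are illustrative assumptions, not the authors' design.

# Illustrative AMCE-style estimation for a conjoint task (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_resp, n_tasks = 500, 6
resp_id = np.repeat(np.arange(n_resp), n_tasks)
tax_level = rng.choice(["none", "low", "high"], size=n_resp * n_tasks)
revenue_use = rng.choice(["general_budget", "green_investment", "rebate"],
                         size=n_resp * n_tasks)

# Synthetic support ratings: higher taxes reduce support, rebates increase it.
support = (5.0
           - 1.0 * (tax_level == "high") - 0.4 * (tax_level == "low")
           + 0.6 * (revenue_use == "rebate")
           + 0.3 * (revenue_use == "green_investment")
           + rng.normal(0, 1.5, n_resp * n_tasks))

df = pd.DataFrame({"resp_id": resp_id, "tax_level": tax_level,
                   "revenue_use": revenue_use, "support": support})

# OLS on dummy-coded attribute levels; cluster-robust SEs by respondent account
# for each respondent rating several policy packages.
model = smf.ols("support ~ C(tax_level, Treatment('none')) + "
                "C(revenue_use, Treatment('general_budget'))", df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["resp_id"]})
print(result.summary())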


Modeling the Relationship between Proxy Measures of Respondent Burden and Survey Response Rates in a Household Survey

Dr Morgan Earp (US National Center for Health Statistics) - Presenting Author
Dr Robin Kaplan (US Bureau of Labor Statistics)
Dr Daniell Toth (US Bureau of Labor Statistics)

Respondent burden has important implications for survey outcomes, including response rates and attrition in panel surveys. Despite this, respondent burden remains an understudied topic in the field of survey methodology, with few researchers systematically measuring objective and subjective burden factors in surveys used to produce official statistics. This research was designed to assess the impact of proxy measures of respondent burden, drawing on both objective (survey length and frequency) and subjective (effort, saliency, and sensitivity) burden measures, on response rates over time in the Current Population Survey (CPS). Exploratory Factor Analysis confirmed that the burden proxy measures were interrelated and formed five distinct factors. Regression tree models further indicated that both objective and subjective proxy burden factors were predictive of future CPS response rates. Additionally, respondent characteristics, including employment and marital status, interacted with these burden factors to further help predict response rates over time. We discuss the implications of these findings, including the importance of measuring both objective and subjective burden factors in production surveys. Our findings support a growing body of research suggesting that subjective burden and individual respondent characteristics should be incorporated into conceptual definitions of respondent burden, and they have implications for adaptive design.
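
Editor's note: the two modelling steps mentioned above (factor analysis of burden proxies, then tree-based prediction of response rates) can be sketched roughly as follows. The snippet uses synthetic data, hypothetical variable names, and an arbitrary number of proxies and factors; it substitutes scikit-learn's FactorAnalysis with varimax rotation and a single DecisionTreeRegressor for the exploratory factor analysis and regression tree modelling reported in the abstract.

# Rough sketch: factor-analyse burden proxies, then predict response rates
# with a regression tree (synthetic data; hypothetical variable names).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 1000
proxies = pd.DataFrame({
    "interview_minutes": rng.normal(30, 8, n),     # objective: length
    "prior_contacts": rng.poisson(4, n),           # objective: frequency
    "perceived_effort": rng.normal(3, 1, n),       # subjective
    "topic_salience": rng.normal(3, 1, n),         # subjective
    "question_sensitivity": rng.normal(2, 1, n),   # subjective
})

# Step 1: reduce the correlated burden proxies to a small set of factors.
fa = FactorAnalysis(n_components=2, rotation="varimax")
factor_scores = fa.fit_transform(proxies)

# Synthetic outcome: response rate declines with the first burden factor.
response_rate = 0.8 - 0.05 * factor_scores[:, 0] + rng.normal(0, 0.05, n)

# Step 2: a regression tree relating burden factors (and, in the study,
# respondent characteristics) to response rates.
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(factor_scores, response_rate)
print(export_text(tree, feature_names=["factor_1", "factor_2"]))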