All time references are in CEST
Innovations in the conceptualization, measurement, and reduction of respondent burden 2
|Session Organiser|Dr Robin Kaplan (U.S. Bureau of Labor Statistics)|
|Time|Thursday 20 July, 14:00 - 15:30|
In an era of declining response rates, increasing use of multiple survey modes, and difficulties retaining respondents across multiple survey waves, the question of how to better understand, measure, and reduce respondent burden is crucial. In official statistics, respondent burden is often conceptualized in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions asked. Bradburn (1978) posited that in addition to these objective measures, burden can be thought of as a multidimensional concept that includes respondents’ subjective perceptions of how effortful the survey is, how sensitive or invasive the questions are, and how long the survey is. The level of burden can also vary by the mode of data collection, survey characteristics, demographic and household characteristics of respondents, and the frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential to increase measurement error, attrition in panel surveys, survey nonresponse, and nonresponse bias, and to degrade data quality. Building on the recent Journal of Official Statistics Special Issue on Respondent Burden, we invite papers on new and innovative methods of measuring both objective and subjective perceptions of respondent burden, as well as of assessing and mitigating the impact of respondent burden on survey response and nonresponse bias. We welcome submissions that explore the following topics:
• The relationship between objective and subjective measures of respondent burden
• Strategies to assess or mitigate the impact of respondent burden
• Quantitative or qualitative research on respondents’ subjective perceptions of survey burden
• The relationship between respondent burden, response propensity, nonresponse bias, response rates, item nonresponse, and other data quality measures
• Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
• Differences in respondent burden across different survey modes
Keywords: Respondent burden, data quality, item nonresponse
Dr Andy Peytchev (RTI) - Presenting Author
Dr Emilia Peytcheva (RTI)
Dr David Wilson (RTI)
Mr Darryl Creel (RTI)
Mr Darryl Cooney (RTI)
Mr Jeremy Porter (RTI)
There are substantial reasons to reduce the length of self-administered surveys, including to minimize nonresponse, to limit breakoffs, and, perhaps most importantly, to reduce measurement error in the collected data (Peytchev and Peytcheva, 2017). Split Questionnaire Design (SQD) (Raghunathan and Grizzle, 1995) gives survey designers an option to achieve a complete dataset with all variables without asking every question of each respondent. In SQD, multiple splits of the questionnaire are created in a manner that allows all possible combinations of variables to be observed for at least part of the sample. The data for the questions omitted for each respondent are imputed. To propagate the uncertainty related to each imputed value, multiple imputation is employed.
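The core SQD requirement described above — every pair of modules jointly observed for some respondents, so cross-module associations can inform imputation — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the four-module layout and the assignment scheme are assumptions for exposition.

```python
import itertools
import random

# Hypothetical questionnaire with 4 modules (A-D). Each respondent
# answers 3 of the 4, so every pair of modules is observed jointly
# in at least one split -- the property SQD relies on so that
# cross-module correlations can support imputation of omitted data.
modules = ["A", "B", "C", "D"]
splits = list(itertools.combinations(modules, 3))  # 4 possible splits

# Verify the pairwise-coverage property.
for pair in itertools.combinations(modules, 2):
    assert any(set(pair) <= set(split) for split in splits)

# Randomly assign respondents to splits; each respondent's one
# omitted module would later be multiply imputed.
random.seed(1)
assignment = {resp_id: random.choice(splits) for resp_id in range(10)}
```

In a real design the splits would be chosen deliberately (e.g., to balance cognitive flow or maximize cross-module correlations, as in the two split sets compared in this study) rather than assigned uniformly at random.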
Among the obstacles to full-scale implementation in large-scale surveys are the need to develop and compare alternative approaches to implementing SQD and the need to evaluate it on survey data. Two steps in SQD are critical to its performance: creation of the splits and imputation of the omitted data. The project first developed two sets of questionnaire splits: (1) based on cognitive aspects of questionnaire design, and (2) balancing the cognitive considerations with the need to maximize correlations across modules to aid imputation. Then, data were deleted for randomly assigned groups and imputed for each set of questionnaire splits using two fundamentally different imputation approaches: (1) regression-based multiple imputation, and (2) weighted sequential hot deck multiple imputation. This 2x2 design was evaluated on data from the 2019 National Survey of College Graduates in the United States. The evaluation criteria include bias and variance in a variety of estimates, comparing the approaches to creation of the splits and the imputation method. We present the design, main challenges, and key results from this two-year study.
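The multiple-imputation machinery both abstracts rely on combines estimates across imputed datasets using Rubin's rules, which split total variance into within- and between-imputation components. A minimal sketch with made-up numbers (the estimates and variances below are illustrative, not results from this study):

```python
import statistics

# Point estimate and its sampling variance from each of M = 5
# multiply imputed datasets (illustrative values only).
estimates = [0.52, 0.49, 0.55, 0.51, 0.53]
variances = [0.0021, 0.0019, 0.0023, 0.0020, 0.0022]

M = len(estimates)
q_bar = statistics.mean(estimates)       # combined point estimate
W = statistics.mean(variances)           # within-imputation variance
B = statistics.variance(estimates)       # between-imputation variance
T = W + (1 + 1 / M) * B                  # total variance (Rubin, 1987)
```

The between-imputation term B is what propagates the uncertainty due to the omitted-by-design data, which is why evaluations like the one described above examine both bias and variance of the resulting estimates.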
Dr Ting Yan (Westat) - Presenting Author
Mr Douglas Williams (Bureau of Labor Statistics)
Concerns about the burden that surveys place on respondents have a long history in the survey field. This article reviews existing conceptualizations and measurements of response burden in the survey literature. Instead of conceptualizing response burden as a one-time overall outcome, we expand the conceptual framework of response burden by positing response burden as reflecting a continuous evaluation of the requirements imposed on respondents throughout the survey process. We specifically distinguish response burden at three timepoints: initial burden at the time of the survey request, cumulative burden that respondents experience after starting the interview, and continuous burden for those asked to participate in a later round of interviews in a longitudinal setting. At each time point, survey and question features affect response burden. In addition, respondent characteristics can affect response burden directly, or they can moderate or mediate the relationship between survey and question characteristics and the end perception of burden. Our conceptual framework reflects the dynamic and complex interactive nature of response burden at different time points over the course of a survey. We show how this framework can be used to explain conflicting empirical findings and guide methodological research.
Dr Christopher Antoun (University of Maryland) - Presenting Author
Ms Xin (Rosalynn) Yang (University of Maryland)
Dr Brady West (University of Michigan)
Dr Jennifer Sinibaldi (Pennsylvania State University)
While most surveys prompt respondents to complete the entire questionnaire in one sitting, there may be potential benefits to dividing surveys into shorter parts (or modules) that a respondent can complete at different points in time at their convenience. However, the existing research neither compares different modular design techniques nor examines how they can be implemented via smartphones. To address these questions, we first developed an Apple iOS smartphone app (“Smartphone Surveys”) that can deploy modular surveys and then conducted an experiment comparing different modular and non-modular formats to a conventional web survey. In total, 664 people, recruited from a previous National Center for Science and Engineering Statistics survey and the Forthright online volunteer panel, were randomly assigned to answer 65 questions about employment and the economy (divided into 7 modules) using one of four methods: (1) modular, with all modules available at once via the app; (2) modular, with modules time-released (one module every other day) via the app; (3) non-modular, with all questions administered at once via the app; and (4) non-modular, using a standard web survey (the control group). We will compare the effects of these approaches on perceived burden as well as on several indicators of response quality (missing data, straightlining, lengths of answers to open questions, and rounded answers). Although preliminary results indicate few differences between the modular and non-modular app-based approaches, we find some important differences between the app-based approaches and the web survey. For example, the app-based approaches led to higher quality data by two metrics (less straightlining, longer responses to open questions) than the web survey. In addition, respondents rated the app-based approaches as easier than the web survey, and this pattern holds in multivariable models adjusting for demographic variables.