All time references are in CEST
Innovations in the conceptualization, measurement, and reduction of respondent burden
Session Organiser | Dr Robin Kaplan (U.S. Bureau of Labor Statistics) |
Time | Thursday 20 July, 09:00 - 10:30 |
Room | U6-09 |
In an era of declining response rates, increasing use of multiple survey modes, and difficulties retaining respondents across multiple survey waves, the question of how to better understand, measure, and reduce respondent burden is crucial. In official statistics, respondent burden is often conceptualized in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions asked. Bradburn (1978) posited that in addition to these objective measures, burden can be thought of as a multidimensional concept that includes respondents’ subjective perceptions of how effortful the survey is, how sensitive or invasive the questions are, and how long the survey is. The level of burden can also vary by the mode of data collection, survey characteristics, demographic and household characteristics of respondents, and the frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential to increase measurement error, attrition in panel surveys, survey nonresponse, and nonresponse bias, and to degrade data quality. Building on the recent Journal of Official Statistics Special Issue on Respondent Burden, we invite papers on new and innovative methods of measuring both objective burden and respondents’ subjective perceptions of burden, as well as on assessing and mitigating the impact of respondent burden on survey response and nonresponse bias. We welcome submissions that explore the following topics:
• The relationship between objective and subjective measures of respondent burden
• Strategies to assess or mitigate the impact of respondent burden
• Quantitative or qualitative research on respondents’ subjective perceptions of survey burden
• The relationship between respondent burden, response propensity, nonresponse bias, response rates, item nonresponse, and other data quality measures
• Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
• Differences in respondent burden across different survey modes
Keywords: Respondent burden, data quality, item nonresponse
Dr Philip Brenner (Utrecht University) - Presenting Author
Dr Lee Hargraves (University of Massachusetts Boston)
Ms Carol Cosenza (University of Massachusetts Boston)
The high demand for cost-effective survey designs has been an impetus for methodological and technological innovation in online and mobile surveys. Yet taking advantage of these innovations can come with an unintended consequence: high respondent burden. Long and complex self-administered surveys, such as those conducted on the Web or via SMS, may cause fatigue and breakoffs that can harm data quality. Thus, we test a planned missing design, which shortens the survey by randomly assigning respondents to answer only a subset of questions, to reduce respondent burden in Web and SMS administrations of the CAHPS Clinician & Group Survey (CG-CAHPS), a survey of patient experiences widely used by health care providers. Members of an online nonprobability panel were randomly assigned to one of three invitation and data collection mode protocols: email invitation to a Web survey, SMS invitation to a Web survey, or SMS invitation to an SMS survey. Within these three mode protocols, respondents were randomly assigned to a planned missing design, which shortened the survey by about 40 percent, or to a control group that received the survey in its entirety. We compare survey duration, breakoff and completion rates, and five key patient experience measures across conditions to assess the effect of the planned missing design across the three modes. We found that a planned missing design worked well with our Web survey, reducing survey duration and breakoff without changing estimates relative to the full-survey control condition. However, mixed findings in the SMS survey suggest that even shortened, 15-item surveys may be too long to substantially reduce respondent burden. We conclude with recommendations for future research.
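For readers unfamiliar with planned missing (matrix sampling) designs, a minimal sketch of the random assignment step is given below. The item names, the 40 percent drop share, and the 50/50 split between conditions are illustrative assumptions, not the CG-CAHPS specification or the authors' implementation.

```python
import random

# Hypothetical item pool; the real CG-CAHPS instrument and its core items differ.
CORE_ITEMS = ["overall_rating", "recommend_provider"]          # asked of everyone
ROTATING_ITEMS = [f"experience_q{i}" for i in range(1, 26)]    # eligible for planned missingness

def assign_form(rng: random.Random, planned_missing: bool, drop_share: float = 0.4) -> list[str]:
    """Return the item list a single respondent sees.

    Control respondents get the full questionnaire; planned-missing respondents
    get all core items plus a random subset of the rotating items, shortening
    the form by roughly `drop_share` (here about 40 percent).
    """
    if not planned_missing:
        return CORE_ITEMS + ROTATING_ITEMS
    keep_n = round(len(ROTATING_ITEMS) * (1 - drop_share))
    kept = set(rng.sample(ROTATING_ITEMS, keep_n))
    # Preserve the original question order; skipped items are simply omitted.
    return CORE_ITEMS + [q for q in ROTATING_ITEMS if q in kept]

rng = random.Random(2023)
for respondent_id in range(3):
    condition = rng.random() < 0.5          # assumed 50/50 split between control and planned missing
    form = assign_form(rng, planned_missing=condition)
    print(respondent_id, "planned-missing" if condition else "control", len(form), "items")
```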
Dr Tugba Adali (UCL Centre for Longitudinal Studies) - Presenting Author
Professor Emla Fitzsimons (UCL Centre for Longitudinal Studies)
Dr Nicolas Libuy Rios (UCL Centre for Longitudinal Studies)
Mr Matt Brown (UCL Centre for Longitudinal Studies)
Measuring income is an important feature of many social surveys but collecting an accurate measure can be challenging. The UK cohort studies all include detailed modules which cover each component of income, in addition to stand-alone measures of total take-home income. We know from respondent feedback that the income module is often perceived as burdensome.
Participants of the Millennium Cohort Study (MCS), a UK cohort of about 19,000 individuals born around 2000, will have reached early adulthood by the time of the next wave of data collection. It will be the first time we collect detailed income measures from the study members, and in the interests of longitudinal continuity, the measures we select at this baseline adult wave will be carried forward in future waves – so the choice of measure is important.
In this paper we present the findings of a pilot study which aimed to test the properties of different measures of income. The primary aim was to lower respondent burden by reducing the number of questions in the income module without impacting the accuracy of our estimates. A sample of 1,000 21- to 30-year-olds was allocated to one of four groups, crossing long versus short income modules with open-ended versus banded (closed) single income questions.
We will compare the income data collected via the long and short modules, examine how closely the single-question estimates align with estimates from the long and short modules, and assess whether banded or unbanded single-item income questions perform better. We will also look at the impact of different measures on respondent feedback about their experience of completing the survey.
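A minimal sketch of how such a four-arm allocation could be implemented is shown below. The arm labels follow the abstract's description (module length crossed with single-question format), but the equal-allocation mechanics, respondent IDs, and seed are assumptions for illustration only, not the study's actual procedure.

```python
import itertools
import random
from collections import Counter

# Four pilot arms: module length crossed with single-question format.
MODULE = ["long_module", "short_module"]
SINGLE_Q = ["open_ended_income", "banded_income"]
ARMS = list(itertools.product(MODULE, SINGLE_Q))   # 2 x 2 = 4 groups

rng = random.Random(42)
sample_ids = list(range(1000))                     # pilot sample of 1,000 respondents aged 21-30
rng.shuffle(sample_ids)

# Assign shuffled respondents to arms in rotation: 250 per arm.
allocation = {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(sample_ids)}

print(Counter(allocation.values()))                # each arm contains 250 respondents
```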
Mr Douglas Williams (U.S. Bureau of Labor Statistics) - Presenting Author
Mrs Sharon Stang (U.S. Bureau of Labor Statistics)
Ms Faith Ulrich (U.S. Bureau of Labor Statistics)
Questionnaire length is an often-used metric of survey burden. However, the relationship between survey participation and questionnaire length is generally weak. This is due to other factors that mediate how burden is perceived, such as survey interest, sponsor, topic, or intrinsic factors like respondent motivation. Additionally, survey researchers go to great lengths to minimize burden, while for sampled survey members, the decision to participate is made ahead of any experience with the survey. Despite this, questionnaire length can affect effort later in the questionnaire, resulting in satisficing, item nonresponse, or survey breakoff. In this paper we examine the effect of increasing questionnaire length in an establishment survey on survey outcomes including unit response, item nonresponse, and data quality. We explore this in the Business Response Survey (BRS), conducted by the Bureau of Labor Statistics. The BRS was designed to be a supplemental survey administered online to provide quick measurement of emerging issues affecting businesses. The survey has been administered yearly since 2020, with the number and complexity of survey questions increasing each year. Complexity was increased with the inclusion of questions that require calculation or access to business records. Despite a nearly three-fold increase in survey questions, the survey remained relatively short and response remained steady year to year at about 25 percent. We expand upon this by reporting on response distributions by contact number, establishment factors (e.g., size, industry), and survey length to examine any effects on data quality or breakoffs. This paper adds to the debate on how burden manifests as survey length and complexity increase.
Dr Dorottya Lantos (UCL)
Dr Dario Moreno Agostino (UCL)
Professor Lasana Harris (UCL)
Professor George Ploubidis (UCL)
Mrs Lucy Haselden (UCL) - Presenting Author
Professor Emla Fitzsimons (UCL)
Measuring different aspects of mental health is a core aim of many social surveys. Mental health is often measured using validated psychological scales, but these scales typically comprise many items and can therefore be burdensome for respondents to complete. As such, it is important to understand whether abbreviated versions of validated scales could be used without negatively impacting the quality of measurement.
The Millennium Cohort Study is following the lives of around 19,000 individuals born around 2000. Tracking different components of mental health over time is a key aim of the study.
The next wave of data collection will launch in 2023 and will be the first time the cohort members have been approached independently of their families. Maximising engagement is key to the long-term success of the study, so minimising respondent burden is vital.
This paper presents the findings of an online pilot study which sought to evaluate the performance of abbreviated versions of four scales measuring depression (PHQ), psychological distress (Kessler), and anxiety (GAD and Malaise) compared with the longer versions of the scales. The study was conducted with a sample of 987 adults, including a sub-sample of 375 young adults aged 18-39. We will explore the measurement properties of the scales, measurement invariance across age and sex, and correlations between the short and long versions.
Findings will inform content decisions for the forthcoming wave of the Millennium Cohort Study but will also be of interest to other studies that wish to measure mental health whilst minimising respondent burden.
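One simple piece of such a short-form evaluation, comparing the internal consistency of an abbreviated scale with the full scale and correlating their scores, can be sketched as follows. The simulated responses, the nine-item long form, and the three-item short form are placeholders rather than the actual PHQ items or the pilot data, and the measurement invariance testing mentioned above would additionally require multi-group confirmatory factor analysis, which this sketch does not cover.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a set of items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated responses stand in for pilot data; item names are illustrative only.
rng = np.random.default_rng(0)
n = 987
trait = rng.normal(size=n)                                    # latent distress score
full_form = pd.DataFrame(
    {f"phq_{i}": np.clip(np.round(trait + rng.normal(scale=0.8, size=n) + 1.5), 0, 3)
     for i in range(1, 10)}                                    # assumed 9-item long version
)
short_items = ["phq_1", "phq_2", "phq_3"]                      # assumed abbreviated version

long_score = full_form.sum(axis=1)
short_score = full_form[short_items].sum(axis=1)

print("alpha (long): ", round(cronbach_alpha(full_form), 2))
print("alpha (short):", round(cronbach_alpha(full_form[short_items]), 2))
print("short-long correlation:", round(np.corrcoef(short_score, long_score)[0, 1], 2))
```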