
ESRA 2019 Programme at a Glance


Understanding Nonrespondents to Inform Survey Protocol Design 3

Session Organisers: Ms Brenda Schafer (Internal Revenue Service)
Dr Scott Leary (Internal Revenue Service)
Mr Rizwan Javaid (Internal Revenue Service)
Mr Pat Langetieg (Internal Revenue Service)
Dr Jocelyn Newsome (Westat)
Time: Wednesday 17th July, 16:30 - 17:30
Room: D20

Government-sponsored household surveys continue to face historically low response rates (e.g., Groves, 2011; de Leeuw & de Heer, 2002). Although an increase in nonresponse does not necessarily result in an increase in nonresponse bias, higher response rates can help reduce average nonresponse bias (Brick & Tourangeau, 2017).

One approach to addressing nonresponse is to try to maximize response, but this often comes at high cost and with mixed success (Tourangeau & Plewes, 2013; Stoop et al., 2010). A variety of intervention strategies have been used, including offering surveys in multiple modes; limiting survey length; offering incentives; making multiple, distinct communication attempts; and targeting messaging to the intended audience (e.g., Dillman et al., 2014; Tourangeau, Brick, Lohr, & Li, 2017). A second approach involves imputations and adjustments after data collection is complete (Kalton & Flores-Cervantes, 2003). However, the effectiveness of this approach depends largely on which auxiliary variables are used in the nonresponse adjustment models.

Although research has been done to understand nonresponse in surveys, there are still many unanswered questions, such as: What demographic characteristics distinguish nonrespondents from respondents? What socio-economic or other barriers may be contributing to a low response rate? Answering these and similar questions may allow us to tailor survey design and administration protocols to overcome specific barriers that lead to nonresponse. Reducing nonresponse may mean fewer adjustments and imputations after data collection.
This session will focus on understanding the characteristics of nonrespondents, barriers to survey response, and how knowledge about nonrespondents can guide survey design protocols. Researchers are invited to submit papers, experiments, pilots, and other approaches on any of the following topics:
• Better understanding how nonrespondents differ from respondents.
• Understanding barriers to response for different subgroups.
• Understanding how nonresponse for different subgroups may have changed over time.
• Using knowledge about nonrespondents to design focused intervention strategies. These could include, but are not limited to, tailored messaging, tailored modes of administration, distinct forms of follow-up, and shortened surveys.
• Designing survey protocols to increase response from hard-to-reach populations of interest.

Keywords: Nonresponse, Survey Protocol, Survey Design, Behavioural Insights

Online Panel Paradata versus Questionnaire Navigation Paradata as Predictors of Non-Response and Attrition

Mr Sebastian Kocar (Australian National University) - Presenting Author
Dr Nicholas Biddle (Australian National University)

In web surveys, paradata (process data describing data collection) can be divided into two main categories: device-type paradata (e.g. device, browser) and questionnaire navigation paradata (e.g. mouse clicks, breaking off a page). In addition to these two categories of web paradata, there is a separate class known as online panel paradata (e.g. survey invitations, surveys completed, panel attrition), which remains a less explored topic.
Three key aspects related to survey errors are specific to longitudinal surveys and online panels: panel conditioning, panel attrition, and survey non-response without subsequent attrition. Panel participation has already been investigated in the literature using device-type paradata and questionnaire navigation paradata. However, online panel paradata offer an even richer insight into respondents' participation behaviour over time. The longitudinal nature of panel paradata offers opportunities for analyses that control for unobserved heterogeneity, in contrast to more traditional statistical methods for studying panel participation such as survival analysis, logistic regression, multiple linear regression, or classification and regression trees.
In this study, we investigated factors affecting participation rates, i.e. non-response and attrition rates, using panel participation paradata and questionnaire navigation paradata. We derived sets of longitudinal variables measuring respondent behaviour over time, such as survey outcome rates, consecutive waves with a particular survey outcome (e.g. response, refusal), and changes in within-survey behaviour over time (e.g. duration, break-offs). Using all waves of Life in Australia (LinA) participation data for all recruited panel members, we sought to identify the best socio-demographic, panel behaviour and within-survey behaviour predictors of non-response and attrition. The purpose of the presentation is to compare the predictive power of panel paradata with that of questionnaire navigation paradata and to discuss how to combine these approaches to predict panel (non)participation accurately.
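To make the comparison concrete, the sketch below is a minimal illustration (not the authors' code): it uses synthetic data rather than Life in Australia records, and the feature names (past_response_rate, consecutive_refusals, median_item_time, breakoff_last_wave) are hypothetical stand-ins for the two paradata families. It contrasts their predictive power for wave non-response using logistic regression and held-out AUC.

# Illustrative sketch only: synthetic data, hypothetical paradata features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical panel-participation paradata (per panel member)
past_response_rate = rng.beta(5, 2, n)        # share of prior waves completed
consecutive_refusals = rng.poisson(0.3, n)    # refusals in a row before this wave

# Hypothetical questionnaire navigation paradata (from the last completed survey)
median_item_time = rng.gamma(2.0, 5.0, n)     # seconds per item
breakoff_last_wave = rng.binomial(1, 0.1, n)  # broke off in the previous wave

# Synthetic outcome: non-response in the current wave
logit = (-1.5 - 2.0 * past_response_rate + 0.6 * consecutive_refusals
         + 0.02 * median_item_time + 0.8 * breakoff_last_wave)
nonresponse = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

def heldout_auc(features):
    # Fit a logistic regression on 70% of cases and score AUC on the rest
    X = np.column_stack(features)
    X_tr, X_te, y_tr, y_te = train_test_split(X, nonresponse,
                                              test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("Panel paradata AUC:     ", heldout_auc([past_response_rate, consecutive_refusals]))
print("Navigation paradata AUC:", heldout_auc([median_item_time, breakoff_last_wave]))
print("Combined AUC:           ", heldout_auc([past_response_rate, consecutive_refusals,
                                               median_item_time, breakoff_last_wave]))

The authors' actual variables and models may differ; the point is only that the two paradata families can be benchmarked on the same outcome and then combined.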


Using Representativeness Indicators to Evaluate the Impacts of Non-Response on Understanding Society Survey Dataset Quality

Dr Jamie Moore (Department of Social Statistics and Demography, University of Southampton) - Presenting Author
Professor Gabriele Durrant (Department of Social Statistics and Demography, University of Southampton)
Professor Peter Smith (Department of Social Statistics and Demography, University of Southampton)

We assess the impacts of non-response on Understanding Society (USoc) survey dataset quality. Non-response is problematic in surveys because differences between non-respondents and respondents can bias estimates relative to fully observed values. USoc is a major UK longitudinal survey on social, economic and health topics. In longitudinal surveys, the impacts of non-response on datasets across survey waves (i.e. sample attrition) are of interest, as are impacts during within-wave data collection: some subjects are only interviewed after multiple attempts, so cost-conscious designers must decide how many attempts to make to obtain acceptable dataset quality.
To measure dataset quality, we use representativeness indicators (coefficients of variation of response propensities: CVs), which quantify the similarity between a subset and the full sample in terms of variation in inclusion propensities estimated from an auxiliary covariate set. Low levels of variation imply low risks of bias in subset estimates.
We assess how the USoc dataset changes across waves compared to the (non-response weighted) first wave dataset (other information on first wave non-respondents is not available), including computing partial CV variants to quantify the impacts on datasets associated with particular auxiliary covariates, which could be targeted by methodological modifications to improve dataset quality. We also analogously consider within-wave data collection, focusing on whether datasets stabilise after fewer than the current number of attempts to interview subjects. In addition, we compare inferences from these analyses with similar inferences based on changes in substantive survey covariates. Finally, using our findings, we advise on future USoc data collection.
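For readers unfamiliar with these indicators, the sketch below is a minimal illustration under our own assumptions (synthetic data and hypothetical covariates, not the authors' implementation or the USoc data): response propensities are estimated from an auxiliary covariate set with logistic regression, and the indicator is the standard deviation of the estimated propensities divided by their mean, with lower values suggesting a responding subset that more closely resembles the full sample on those covariates.

# Illustrative sketch only: synthetic data, hypothetical auxiliary covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10000

# Hypothetical auxiliary covariates observed for respondents and non-respondents
age = rng.integers(18, 90, n).astype(float)
urban = rng.binomial(1, 0.7, n).astype(float)

# Synthetic response indicator (1 = responded at this wave / by this call attempt)
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 0.02 * (age - 50) + 0.3 * urban)))
responded = rng.binomial(1, p_true)

# Estimate response propensities given the auxiliary covariate set
X = np.column_stack([age, urban])
rho_hat = LogisticRegression(max_iter=1000).fit(X, responded).predict_proba(X)[:, 1]

# Coefficient of variation of the estimated propensities: low variation
# implies low risk of non-response bias with respect to these covariates
cv = rho_hat.std(ddof=1) / rho_hat.mean()
print(f"Response rate: {responded.mean():.3f}  CV of propensities: {cv:.3f}")

The partial CV variants referenced in the abstract attribute this variation to individual covariates, which is what allows the covariate-targeted design modifications described above.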