



Wednesday 19th July, 16:00 - 17:30 Room: N AUD4


Adaptive and Responsive Designs in Complex Surveys: Recent Advancements and Challenges 3

Chair: Mr Jason Fields (U.S. Census Bureau)
Coordinator 1: Dr Asaph Young Chun (U.S. Census Bureau)
Coordinator 2: Professor James Wagner (University of Michigan)
Coordinator 3: Dr Barry Schouten (Statistics Netherlands)
Coordinator 4: Ms Nicole Watson (University of Melbourne)

Session Details

Adaptive and responsive survey designs (Groves and Heeringa, 2006; Wagner, 2008) have attempted to respond to a changing survey environment that has become increasingly multimode, multilingual, and driven by multiple data sources. The Journal of Official Statistics will publish a Special Issue on adaptive design in complex surveys and censuses in 2017 (edited by Chun, Schouten and Wagner, forthcoming). In our efforts to address the multiple challenges affecting the survey community, and the fundamental interest of survey methodologists in producing quality data, we propose a session of papers that discuss the latest methodological solutions and challenges in adaptive and responsive designs for complex surveys. We encourage submission of papers on the following topics in adaptive or responsive design:

1. Applied and theoretical contributions, and comparisons of variants of adaptive design that leverage the strengths of administrative records, big data, census data, and paradata. For instance, what cost-quality tradeoff paradigm can be operationalized to guide the development of cost and quality metrics and their use across the survey life cycle? Under what conditions can administrative records or big data be adaptively used to supplement survey data collection and improve data quality?

2. Papers addressing the three drivers of adaptive/responsive design: cost, respondent burden, and data quality. For instance, what indicators of data quality can be integrated to monitor the course of the data collection process? What stopping rules for data collection can be used across the phases of a multi-mode survey?

3. Papers involving longitudinal survey designs, where data collection systems must fulfill their panel focus by providing data for the same units over time while leveraging adaptive processes to reduce cost, reduce burden, and/or increase quality. For instance, how can survey managers best handle the complex issues around implementing adaptive and responsive designs, especially for panel surveys that are, in principle, focused on measuring change over time? How are overrepresented or low-priority cases handled in a longitudinal context?

4. Papers involving experimental designs or simulations of adaptive survey design, for instance experimental implementations of an adaptive design, especially those involving multiple data sources, mixed-mode data collection, or a cross-national design.

5. Papers that apply Bayesian methods to build adaptive designs, for example designs in which the design parameters are given priors that are then updated as additional data are collected (a minimal sketch of this idea follows this list).
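
To make topic 5 concrete, here is a minimal sketch, assuming a conjugate Beta-Binomial setup in which each sample subgroup's response propensity receives a Beta prior that is updated as cases resolve, and extra follow-up effort is directed to the subgroup with the lowest posterior mean. The subgroup names, priors, and counts are illustrative assumptions, not taken from any paper in this session.

# Illustrative Beta-Binomial updating of subgroup response propensities.
from dataclasses import dataclass

@dataclass
class GroupPropensity:
    alpha: float  # Beta parameter: prior pseudo-responses plus observed responses
    beta: float   # Beta parameter: prior pseudo-nonresponses plus observed nonresponses

    def update(self, responded: int, nonresponded: int) -> None:
        """Conjugate update with one data collection phase's outcomes."""
        self.alpha += responded
        self.beta += nonresponded

    @property
    def mean(self) -> float:
        """Posterior mean response propensity."""
        return self.alpha / (self.alpha + self.beta)

# Weakly informative Beta(2, 2) priors for two hypothetical subgroups.
groups = {"urban_renters": GroupPropensity(2, 2),
          "rural_owners": GroupPropensity(2, 2)}

# After the first data collection phase, update with observed outcomes.
groups["urban_renters"].update(responded=40, nonresponded=160)
groups["rural_owners"].update(responded=90, nonresponded=110)

# Direct additional follow-up effort to the lowest-propensity subgroup.
for name, g in groups.items():
    print(f"{name}: posterior mean propensity = {g.mean:.2f}")
target = min(groups, key=lambda name: groups[name].mean)
print(f"Allocate additional follow-up effort to: {target}")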



Paper Details

1. Measurement error in proxy measures of key survey variables to estimate, reduce, and adjust for nonresponse bias
Professor Andy Peytchev (University of Michigan)
Dr Emilia Peytcheva (RTI International)
Dr Matt Jans (University of California at Los Angeles)

High survey nonresponse creates substantial potential for nonresponse bias in population estimates. As a result, surveys increasingly rely on auxiliary information to (1) estimate nonresponse bias, (2) attempt to reduce nonresponse bias during data collection, and (3) inform statistical models used in weighting and estimation. All three uses rely on auxiliary data that are strongly correlated with key survey variables, and such data are rare in household surveys. We have designed and implemented the collection of proxy survey variables for nonrespondents, taking advantage of a two-phase interview design in a telephone survey. Proxy measures, however, are subject to measurement error. We examine an aspect of the relationship between nonresponse and measurement error that is not well understood: the effect of measurement error in auxiliary data on addressing nonresponse. In particular, we ask whether it affects data collection decisions aimed at minimizing nonresponse and whether it affects nonresponse adjustments. We also propose a correction for measurement error and demonstrate its application.

We embed proxy measures about the household and the selected respondent(s) in the screening instrument of the California Health Interview Survey. We include two key health questions (health conditions, asked about the selected respondent(s), and public health insurance, asked at the household level) for potential use in the estimation, reduction, and adjustment for nonresponse bias. We evaluate the measurement properties and causes of measurement error in these questions and their impact on each goal. Preliminary results suggest significant underreporting of health conditions in the screener for both the landline and cell phone samples, but this result varies across survey iterations. Such outcomes are likely to have differential consequences for data collection decisions related to targeting cases and for statistical models used in weighting and estimation. We examine the correlates of measurement error in the proxy measures and attempt to devise adjustments for measurement error that will allow these measures to be used to reduce nonresponse error.
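
The abstract does not specify the form of its adjustments. As a generic illustration of using a screener proxy as auxiliary data in nonresponse weighting, the sketch below forms weighting classes from a hypothetical household-level proxy (a public insurance indicator) and inflates respondents' base weights by the inverse of the weighted response rate in their class. All variable names and values are assumptions for illustration, not the authors' method or data.

# Hypothetical weighting-class nonresponse adjustment using a screener proxy.
import pandas as pd

# Screened cases: base weight, proxy measure, and whether the extended
# interview was completed (all values invented for the example).
cases = pd.DataFrame({
    "base_weight": [1.0, 1.0, 1.2, 1.2, 0.8, 0.8, 1.1, 1.1],
    "proxy_public_insurance": [1, 1, 1, 0, 0, 0, 0, 1],
    "responded": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Weighted response rate within each proxy-defined weighting class.
weighted_responses = (cases["base_weight"] * cases["responded"]).groupby(cases["proxy_public_insurance"]).sum()
weighted_totals = cases["base_weight"].groupby(cases["proxy_public_insurance"]).sum()
response_rate = weighted_responses / weighted_totals

# Respondents carry the weight of nonrespondents in their weighting class.
respondents = cases[cases["responded"] == 1].copy()
respondents["nr_adjusted_weight"] = (
    respondents["base_weight"] / respondents["proxy_public_insurance"].map(response_rate)
)
print(respondents[["proxy_public_insurance", "base_weight", "nr_adjusted_weight"]])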


2. Designs for Reducing Nonresponse Bias
Dr Roger Tourangeau (Westat)
Dr Michael Brick (Westat)

A recent paper by Groves and Peytcheva (2008) reported a meta-analysis that examined the relation between nonresponse rates and nonresponse bias. That study, as well as an earlier study by Groves (2006), reported a weak correlation between the two. One striking finding was the large within-study variation in the estimates of nonresponse bias. If nonresponse bias is truly an estimate-level property and only weakly related to study-level characteristics, this implies that no single number, whether the response rate or a more sophisticated index such as the R-indicator, can tell us much about the overall quality of the estimates. Similarly, it will be difficult to design study-level interventions to reduce nonresponse bias, since the biases vary widely across estimates and do not reflect characteristics of the study itself. We reanalyze the Groves and Peytcheva data and come to somewhat different conclusions about the empirical relation between nonresponse rates and nonresponse bias. We also consider how these results align with the theory of nonresponse. Our results suggest that strategies designed to increase response rates or to improve the representativeness of the sample may be worthwhile. We examine several such strategies. A key property of any successful intervention (such as changing the data collection protocol after subsampling nonrespondents for further follow-up) is that it reduces the imbalance in the respondent set; we provide an example of one such strategy for a low-cost survey. We examine various reasons why the relation between nonresponse rates and nonresponse biases is not stronger and lay out the implications of our findings for survey design.
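
For readers unfamiliar with the R-indicator mentioned above, the sketch below shows its usual sample-based computation, R = 1 - 2*S(rho), where rho are response propensities estimated from auxiliary variables available for respondents and nonrespondents alike (Schouten, Cobben and Bethlehem, 2009). The covariates and outcomes are simulated purely for illustration and are not from the reanalysis described in the abstract.

# Illustrative computation of the R-indicator from estimated response propensities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 3))                              # auxiliary/frame variables
true_propensity = 1 / (1 + np.exp(-(0.2 + 0.8 * X[:, 0])))
responded = rng.binomial(1, true_propensity)             # simulated response outcome

# Estimate response propensities for the full sample (respondents and nonrespondents).
model = LogisticRegression(max_iter=1000).fit(X, responded)
rho_hat = model.predict_proba(X)[:, 1]

# R-indicator: 1 means perfectly balanced response; lower values mean less representative.
r_indicator = 1 - 2 * rho_hat.std(ddof=1)
print(f"Estimated R-indicator: {r_indicator:.3f}")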


3. Adaptive and Responsive Survey Design: Looking Back at the Last 10 Years and Looking Forward to the Next 10 Years
Dr Asaph Young Chun (U.S. Census Bureau)

Today's rapidly changing survey environment requires a nimble, flexible design that leverages multiple data sources, possibly in multi-mode data collection, produces high-quality data, and optimizes cost allocation across the survey life cycle. Adaptive and responsive survey designs have attempted to respond to this changing environment (Groves and Heeringa, 2006; Wagner, 2008; Chun, Schouten, Wagner and Heeringa, forthcoming). The purpose of this paper is to discuss the challenges and lessons learned in adaptive and responsive survey design since 2006, and to address the opportunities and challenges that remain for the next 10 years. The paper draws on the latest articles to be featured in a Special Issue on Adaptive Design, to be published in 2017 by the Journal of Official Statistics (edited by Chun, Schouten and Wagner, forthcoming), among others. A critical review of adaptive survey designs will help identify and advance best practices and relevant theories for the building blocks of adaptive design, such as contact and response propensity modeling, tailoring data collection strategies to sample subgroups, rules for switching from one mode to another, optimal use of paradata, and cost-quality tradeoffs.
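
As a small illustration of two of the building blocks listed above, the sketch below fits a response propensity model to hypothetical paradata and applies a simple rule that flags low-propensity open cases for a switch to an interviewer-administered mode. The features, the 0.3 threshold, and the data are assumptions made for this example and are not taken from the paper.

# Hypothetical propensity-based mode-switch rule for open cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2_000
attempts = rng.poisson(3, n)                 # contact attempts so far (paradata)
urban = rng.integers(0, 2, n)                # urban (1) vs. rural (0) frame flag
age = rng.normal(50, 15, n)                  # age from the sampling frame
X = np.column_stack([attempts, urban, age])

# Simulated outcome of the current (e.g., web) phase.
linpred = -0.5 + 0.3 * urban - 0.2 * (attempts - 3)
responded_so_far = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# Fit the response propensity model and score every case.
model = LogisticRegression(max_iter=1000).fit(X, responded_so_far)
propensity = model.predict_proba(X)[:, 1]

# Mode-switch rule: open cases below the threshold move to interviewer follow-up.
open_cases = responded_so_far == 0
switch_to_interviewer = open_cases & (propensity < 0.3)
print(f"Open cases: {open_cases.sum()}, flagged for mode switch: {switch_to_interviewer.sum()}")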