ESRA 2019 Conference Programme


Responsive and Adaptive Design (RAD) for Survey Optimization 1

Session Organisers: Dr Asaph Young Chun (Statistics Research Institute, Statistics Korea)
Dr Barry Schouten (Statistics Netherlands)
Time: Wednesday 17th July, 14:00 - 15:00
Room: D04

This session is devoted to discussing an evidence-based approach to guiding real-time design decisions during the course of survey data collection. We call it responsive and adaptive design (RAD), a scientific framework driven by cost-quality tradeoff analysis and optimization that enables the most efficient production of high-quality data. The notion of RAD is not new, nor is it a silver bullet that resolves all the challenges of complex survey design. RAD embraces precedents and variants of responsive and adaptive design that survey designers and researchers have practiced over decades (e.g., Groves and Heeringa 2006; Wagner 2008). In this session, we present papers that discuss any of the four pillars of RAD: survey process data and auxiliary information, design features and interventions, explicit quality and cost metrics, and a quality-cost optimization tailored to survey strata. The papers will discuss how these building blocks of RAD are addressed and integrated, in the spirit of the papers published in the 2017 JOS special issue on RAD and the 2018 JOS special section on RAD (edited by Chun, Schouten and Wagner). We particularly welcome RAD ideas implemented for survey-assisted population modeling, rigorous optimization strategies, and total survey cost-error modeling.
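As a concrete illustration of the fourth pillar, the sketch below shows one way a quality-cost optimization tailored to survey strata might look. It is a minimal, hypothetical example and is not taken from any of the session papers: the strata, protocols, propensities and costs are invented, and "quality" is reduced to a per-stratum response-propensity threshold.

# A minimal sketch of stratum-level quality-cost optimization: for each
# stratum, pick the cheapest design feature (protocol) whose expected
# response propensity still meets a quality target. All numbers and
# names are hypothetical.

# (stratum, protocol) -> (expected response propensity, cost per case)
DESIGNS = {
    "young_urban":  {"web": (0.25, 5.0),  "f2f": (0.55, 45.0)},
    "older_rural":  {"web": (0.10, 5.0),  "f2f": (0.60, 60.0)},
    "middle_mixed": {"web": (0.35, 5.0),  "f2f": (0.50, 50.0)},
}

TARGET_PROPENSITY = 0.30  # illustrative quality threshold per stratum


def allocate(designs, target):
    """Choose, per stratum, the cheapest protocol meeting the target."""
    plan = {}
    for stratum, options in designs.items():
        feasible = [(cost, name) for name, (p, cost) in options.items() if p >= target]
        if feasible:
            plan[stratum] = min(feasible)[1]
        else:
            # No protocol meets the target: fall back to highest propensity.
            plan[stratum] = max(options, key=lambda n: options[n][0])
    return plan


if __name__ == "__main__":
    for stratum, protocol in allocate(DESIGNS, TARGET_PROPENSITY).items():
        p, cost = DESIGNS[stratum][protocol]
        print(f"{stratum}: {protocol} (propensity {p:.2f}, cost {cost:.0f})")

In a real design the constraint would typically be a representativeness or bias measure rather than a raw propensity cutoff, and the optimization would respect a total budget across strata.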

This session will present RAD papers involving applied or theoretical contributions, addressing questions such as: 1) What approaches can be used to guide the development of cost and quality metrics in RAD and their use over the survey life cycle? 2) Which methods of RAD can identify phase boundaries or stopping rules that optimize responsive designs? 3) What are best practices for applying RAD to produce high-quality data in a cost-effective manner? 4) Under what conditions can administrative records or big data be adaptively used to supplement survey data collection and improve data quality?
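Question 2 concerns phase boundaries and stopping rules. One common idea in the RAD literature is to monitor an indicator of representativeness during data collection and close a design phase once the indicator stabilizes (the phase has reached capacity). The sketch below illustrates this with the coefficient of variation of estimated response propensities; the daily propensity series and the threshold are synthetic and purely illustrative.

# A minimal sketch of a phase-boundary / stopping rule: track the
# coefficient of variation (CV) of estimated response propensities day
# by day and stop the phase once it stops changing.

import statistics


def cv(propensities):
    """Coefficient of variation of response propensities."""
    mean = statistics.mean(propensities)
    return statistics.pstdev(propensities) / mean


def phase_boundary(daily_propensities, epsilon=0.005):
    """Return the first day whose CV changed by less than epsilon from
    the previous day, i.e. the phase has reached capacity."""
    previous = None
    for day, props in enumerate(daily_propensities, start=1):
        current = cv(props)
        if previous is not None and abs(current - previous) < epsilon:
            return day
        previous = current
    return None  # phase capacity not reached within the observed days


if __name__ == "__main__":
    # Hypothetical estimated propensities of the active sample on 4 days.
    days = [
        [0.2, 0.5, 0.8, 0.4],
        [0.25, 0.5, 0.75, 0.45],
        [0.28, 0.5, 0.72, 0.47],
        [0.283, 0.5, 0.718, 0.472],
    ]
    print("Stop after day:", phase_boundary(days))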

Keywords: Responsive design, adaptive design, survey optimization, tradeoff analysis, total survey error

Responsive and Adaptive Design for Survey Optimization

Dr Asaph Chun (Statistics Research Institute, Statistics Korea) - Presenting Author

The purpose of this paper is to discuss an evidence-based approach to guiding real-time design decisions during the course of survey data collection. We call it responsive and adaptive design (RAD), a scientific framework based on cost-quality tradeoff analysis that enables the most effective production of high-quality data. The notion of RAD is neither new nor a silver bullet that resolves all the challenges of complex survey design. RAD embraces precedents and variants of responsive design and adaptive design that survey designers and researchers have practiced over decades. In this paper, we present the four pillars of RAD: survey process data and auxiliary information, design features and interventions, explicit quality and cost metrics, and a quality-cost optimization tailored to survey strata. We demonstrate how these building blocks of RAD may be addressed, with illustrative examples. We address the remaining challenges and opportunities for the advancement of RAD, including several RAD ideas for future research.
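One well-known explicit quality metric from this literature is the R-indicator of Schouten, Cobben and Bethlehem (2009), R = 1 - 2*S(rho), where S(rho) is the standard deviation of the estimated response propensities rho. The sketch below computes it for two hypothetical sets of propensities; in practice the propensities would come from a response-propensity model fitted on sampling-frame variables and paradata.

# A minimal sketch of the R-indicator as an explicit quality metric.
# Propensities below are hypothetical.

import statistics


def r_indicator(propensities):
    """R-indicator: 1 means perfectly representative response (all
    propensities equal); values near 0 signal strong selectivity."""
    return 1.0 - 2.0 * statistics.pstdev(propensities)


if __name__ == "__main__":
    balanced = [0.5, 0.52, 0.48, 0.5]
    selective = [0.1, 0.9, 0.2, 0.8]
    print("balanced :", round(r_indicator(balanced), 3))   # close to 1
    print("selective:", round(r_indicator(selective), 3))  # much lower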


Adaptive Survey Design: PIAAC Experiment and International Implementation

Mr Thomas Krenzke (Westat) - Presenting Author
Dr Leyla Mohadjer (Westat)
Dr Minsun Riddles (Westat)
Dr Natalie Shlomo (University of Manchester)

The Programme for the International Assessment of Adult Competencies (PIAAC) is a household survey conducted in over 30 countries to measure adults’ proficiency in literacy, numeracy and adaptive problem solving. The survey gathers information on how adults use their skills at home, at work and in the wider community. As household surveys face steadily declining response rates and rising costs, adaptive survey design has emerged as an approach to achieving survey goals (e.g., maximizing response rates or minimizing nonresponse bias) by adjusting parts of the data collection in response to interim outcomes. Taking advantage of the availability of paradata and the ongoing advancement of adaptive data collection, we conducted an adaptive survey design experiment uniquely designed to address the challenges of the PIAAC data collection process.

In this presentation, we will describe the experiment conducted in the US PIAAC sample in 2017. Geographic area sampling units were split into treatment and control (traditional sample monitoring) groups. In the treatment group, adaptive survey design strategies were used throughout the data collection period to manage the sample and target resources so as to reduce nonresponse bias and improve the representativeness of the sample, as well as to reduce costs. We will describe the strategies employed: refreshment samples, influence measures, and case prioritization. Following the successful outcome of the experiment, specifications were written for the international implementation. We will discuss the initial challenges and minimum requirements of this implementation, to build upon for future cycles of PIAAC.
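The abstract does not spell out the prioritization algorithm, so the sketch below is only a schematic illustration of case prioritization in an adaptive design, not the actual PIAAC or Westat procedure: active cases are ranked by a hypothetical precomputed "influence" measure (how much resolving the case would improve representativeness), with estimated response propensity as a tie-breaker.

# A minimal, hypothetical sketch of case prioritization: direct effort
# first to cases whose resolution would most improve representativeness.

from dataclasses import dataclass


@dataclass
class Case:
    case_id: str
    propensity: float  # estimated probability of response
    influence: float   # hypothetical gain in representativeness if resolved


def prioritize(cases, top_n=3):
    """Highest influence first; break ties toward low-propensity cases,
    which routine effort tends to under-work."""
    return sorted(cases, key=lambda c: (-c.influence, c.propensity))[:top_n]


if __name__ == "__main__":
    active = [
        Case("A101", propensity=0.15, influence=0.9),
        Case("A102", propensity=0.60, influence=0.2),
        Case("A103", propensity=0.30, influence=0.9),
        Case("A104", propensity=0.45, influence=0.5),
    ]
    for c in prioritize(active):
        print(c.case_id, c.propensity, c.influence)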


Do Mood and Topic Interest Predict Nonresponse? A Contribution on Declining Response Rates and Frustrated Respondents

Mr Alexandre Pollien (FORS)
Dr Oliver Lipps (FORS) - Presenting Author

To test the feasibility of switching a one-hour face-to-face survey to the web, the 2017 European Values Study in Switzerland used different experimental conditions: in addition to the original one-hour face-to-face survey, these conditions included six push-to-web, matrix-designed half-hour surveys plus half-hour follow-ups. As one hour turns out to be too long for web surveys, the half-hour surveys with follow-ups may be a good alternative. If this design fails due to nonresponse to the follow-up or high item nonresponse, multiple imputation of the missing matrix part and the missing items may be a promising post-survey adjustment.
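As an illustration of that adjustment idea (not the authors' implementation), the sketch below multiply imputes the block each respondent was not asked in a matrix design, using scikit-learn's IterativeImputer with posterior sampling, and combines estimates across imputations. The data set is synthetic and tiny.

# A minimal sketch of multiple imputation for planned missingness in a
# matrix design; NaN marks the block a respondent was not asked.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

data = np.array([
    [3.0, 4.0, np.nan, np.nan],
    [2.0, 2.0, np.nan, np.nan],
    [np.nan, np.nan, 5.0, 4.0],
    [np.nan, np.nan, 1.0, 2.0],
    [4.0, 5.0, 5.0, 5.0],   # a few full-length interviews anchor the model
    [1.0, 1.0, 2.0, 1.0],
])

M = 5  # number of multiply imputed data sets
completed = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(data)
    for m in range(M)
]

# Combine across imputations for a simple estimand: the mean of item 3.
estimates = [c[:, 2].mean() for c in completed]
print("MI estimate of item-3 mean:", np.mean(estimates))
print("Between-imputation variance:", np.var(estimates, ddof=1))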

This paper analyses item nonresponse in the half-hour, matrix-designed push-to-web surveys, as well as nonresponse to their follow-ups, using information about the respondents' moods and topic interests. While the topic interest questions were asked in the first part of the questionnaire, the mood questions were asked at the end of the first part: whether the respondent feels interested, seduced, sceptical, frustrated, irritated, tired, bored, and so on. Results will show whether different moods and topic interests affect item nonresponse and/or unit nonresponse over and above more traditional predictors of nonresponse. We hope to learn more about which kinds of people produce which kinds of nonresponse, to increase insight into selectivity due both to item nonresponse and to unit nonresponse.
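A minimal sketch of the kind of model the abstract implies follows: a logistic regression of follow-up nonresponse on mood indicators collected at the end of the first questionnaire part. Everything below is simulated; the real analysis would also include topic interest and traditional nonresponse correlates.

# A hypothetical simulation: follow-up nonresponse rises with
# frustration and boredom, falls with interest.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical 0/1 mood indicators ticked by the respondent.
frustrated = rng.integers(0, 2, n)
bored = rng.integers(0, 2, n)
interested = rng.integers(0, 2, n)

# Simulate follow-up nonresponse from the mood indicators.
logit = -1.0 + 1.2 * frustrated + 0.8 * bored - 0.9 * interested
nonresponse = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([frustrated, bored, interested])
model = LogisticRegression().fit(X, nonresponse)

for name, coef in zip(["frustrated", "bored", "interested"], model.coef_[0]):
    print(f"{name:10s} coefficient: {coef:+.2f}")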