Friday 17th July, 11:00 - 12:30 Room: O-202


Data Collection Management: Monitoring Bias in a Total Survey Error Context

Convenor Mr Brad Edwards (Westat)

Session Details

In the total survey error paradigm, nonsampling errors and their relationship to cost have been very difficult to quantify, especially in real time. The problem is particularly vexing in interviewer-administered surveys because of their large labor costs. Recent advances in paradata processing and analysis offer an opportunity to address this problem in survey operations (Kreuter 2013). For example, computer audio-recorded interviewing (CARI) data selected with known probabilities from a pretest could be used to produce estimates of questionnaire (specification) error, to make improvements that address the design problems, and to monitor error levels after changes are implemented in the main data collection phase of face-to-face or telephone surveys (Hicks, Edwards, Tourangeau, et al. 2010). The additional cost of CARI coding and analysis could reduce the resources available to complete more interviews, but it could also yield a net reduction in bias.
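To illustrate the kind of estimate described above, the following sketch (Python, with entirely invented data and field names, not any survey organization's actual procedure) weights CARI-coded interviews by the inverse of their known selection probabilities to produce a simple Horvitz-Thompson style estimate of the specification error rate.

# Hypothetical sketch: estimating a questionnaire (specification) error rate
# from CARI recordings selected with known probabilities. All data are invented.
coded_recordings = [
    # (selection_probability, 1 if the coder flagged a specification error, else 0)
    (0.10, 1),
    (0.10, 0),
    (0.25, 0),
    (0.25, 1),
    (0.50, 0),
]

# Weight each coded case by 1 / p(selection), then take the weighted error rate.
weighted_errors = sum(flag / p for p, flag in coded_recordings)
weighted_total = sum(1 / p for p, _ in coded_recordings)
error_rate = weighted_errors / weighted_total
print(f"Estimated specification error rate: {error_rate:.2%}")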
Another example: GPS data could detect likely interview falsification on 100% of the cases completed in face-to-face surveys, at much lower cost than other techniques. Because GPS data can detect falsification as it happens, quality improves and costs are saved that could be directed elsewhere. The quality improvement could be estimated by comparing the level of falsification detected by GPS with the level detected by more traditional methods (e.g., mail return forms, telephone and in-person re-interviews, CARI coding). Data collection savings from this innovation could be estimated by comparing the GPS costs with the costs of detecting and remediating falsifiers using traditional methods.
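A back-of-the-envelope comparison of the kind implied here might look like the sketch below (Python; every cost figure and detection count is an illustrative assumption, not a reported result).

# Hypothetical cost/quality comparison of GPS-based vs. traditional
# falsification detection. All figures are invented assumptions.
cases_completed = 5_000
gps_cost_per_case = 0.50      # assumed per-case cost of GPS capture and review
reinterview_cost = 35.00      # assumed cost of one telephone/in-person re-interview
reinterview_sample = 0.05     # traditional approach: re-interview 5% of cases

falsified_found_gps = 40      # assumed cases flagged by GPS (100% coverage)
falsified_found_trad = 12     # assumed cases found by the 5% re-interview sample

gps_total_cost = cases_completed * gps_cost_per_case
trad_total_cost = cases_completed * reinterview_sample * reinterview_cost

print(f"GPS monitoring cost:       ${gps_total_cost:,.0f}")
print(f"Traditional re-interviews: ${trad_total_cost:,.0f}")
print(f"Estimated savings:         ${trad_total_cost - gps_total_cost:,.0f}")
print(f"Additional falsified cases detected: {falsified_found_gps - falsified_found_trad}")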
This session will include presentations on recent developments in CARI, GPS, mobile technology, and call record data, and on studies that detect bias associated with various data collection activities, informed by the TSE paradigm.

Paper Details

1. Quantifying Measurement Error
Mr Brad Edwards (Westat)
Dr Aaron Maitland (Westat)

Data from a pretest can be used to produce estimates of questionnaire (specification) error, to make improvements that address the design problems, and to monitor error levels after changes are implemented in the main data collection phase of a survey. We demonstrate the feasibility of this approach on a recent CAPI survey. Problems with the survey protocol for completing a life events calendar were discovered in the pretest, and the protocol was changed for the main data collection. CARI data from simple random probability samples of pretest and main data collection interviews are reviewed to determine whether the change was effective.
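One simple way to check whether such a protocol change was effective is to compare the CARI-coded error rates in the pretest and main samples; the sketch below (Python, with assumed counts rather than the authors' actual data) uses a two-proportion z-test.

# Hypothetical comparison of CARI-coded error rates before and after a
# protocol change, using a two-proportion z-test. Counts are invented.
from math import sqrt
from statistics import NormalDist

pretest_errors, pretest_n = 18, 60    # assumed: coded errors / sampled pretest interviews
main_errors, main_n = 10, 120         # assumed: coded errors / sampled main interviews

p1, p2 = pretest_errors / pretest_n, main_errors / main_n
pooled = (pretest_errors + main_errors) / (pretest_n + main_n)
se = sqrt(pooled * (1 - pooled) * (1 / pretest_n + 1 / main_n))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Pretest error rate: {p1:.1%}, main error rate: {p2:.1%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")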


2. Legal issues in recruitment and their likely impact on response: an example from the pilot for Life Study
Ms Darina Peycheva (UCL Institute of Child Health)

This presentation focuses on recruitment to the national probability sample within the new UK birth cohort (Life Study, www.lifestudy.ac.uk), describing methodological interventions aimed at improving recruitment into the study and the continued engagement of participants. This methodological work forms part of an overall plan to understand the biases that can arise when recruiting participants into a longitudinal study using a legally required “opt-in” approach. To mitigate the implications of the “opt-in” procedure, different strategies to encourage participation will be tested during a pilot phase using a split-sample design.


3. Operations Management from a TSE Perspective
Ms Patty Maher (Survey Research Center, University of Michigan)
Ms Beth-ellen Pennell (Survey Research Center, University of Michigan)

This presentation will highlight examples of innovative methods for managing field operations and interviewer behavior within the Total Survey Error paradigm. Specifically, we will provide a wide range of examples of quality monitoring approaches from surveys conducted in a variety of contexts, including transitional and developing countries. These examples will include new approaches to the collection and use of rich paradata (e.g., call records, audit trails), digital audio recording, ACASI, and mobile technology, as well as innovative uses of digital photography, GPS, and other anthropometric data collection methods. We will also provide examples of project dashboards.


4. Field Interviewer Travel Routes: Cost Control and Nonresponse Bias
Mr Brad Edwards (Westat)

Interviewers’ travel from their homes to respondents’ homes is the largest component of fieldwork costs. With GIS it is now possible to view travel routes in real time and to identify inefficient routing. From call records on interviewers’ connected devices, we can determine instantly whether interviewers are making visits at appropriate times of the day and days of the week, and we can redirect interviewers to work on cases that have high priority. Using a recent example, I will show how these approaches can reduce costs and have the potential to reduce survey nonresponse for specific population groups.
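As a small illustration of the call-record checks described here (a sketch with hypothetical records and an assumed calling-window rule, not any organization's production system), the following Python flags contact attempts made outside the windows considered productive.

# Hypothetical sketch: flag contact attempts made outside preferred calling
# windows (assumed rule: weekends any time, weekday evenings 17:00-21:00).
from datetime import datetime

call_records = [
    {"case_id": "A101", "attempt_time": datetime(2015, 7, 6, 10, 30)},   # Monday morning
    {"case_id": "A102", "attempt_time": datetime(2015, 7, 7, 18, 45)},   # Tuesday evening
    {"case_id": "A103", "attempt_time": datetime(2015, 7, 11, 14, 0)},   # Saturday afternoon
]

def in_productive_window(ts):
    # weekday() returns 5 or 6 for Saturday/Sunday.
    if ts.weekday() >= 5:
        return True
    return 17 <= ts.hour < 21

for rec in call_records:
    if not in_productive_window(rec["attempt_time"]):
        print(f"Case {rec['case_id']}: attempt at {rec['attempt_time']} falls outside the preferred window")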