New Strategies of Assessing Data Quality within Interviewer-Administered Surveys 1

Coordinator 1: Dr Laura Silver (Pew Research Center)
Coordinator 2: Mr Kyle Taylor (Pew Research Center)
Coordinator 3: Ms Danielle Cuddington (Pew Research Center)
Coordinator 4: Dr Patrick Moynihan (Pew Research Center)

Session Details

International survey researchers are no strangers to the difficulties inherent in assuring high-quality data, particularly in a post-GDPR environment where access to audio files -- a key mechanism to verify the caliber of interviewing -- may be severely restricted. Moreover, closely monitoring or investigating every sampled case is unlikely given resource constraints (e.g., limited time, budget and capacity), driving researchers to base evaluations on aggregate measures of data quality, such as interview length (in its entirety or by sections), extreme item nonresponse and other related substantive and paradata indicators.
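To make the aggregate approach concrete, below is a minimal sketch in Python/pandas of how such indicators might be screened. The column names, data and cutoffs (a 3-MAD rule for interview length; a 20% item-nonresponse ceiling) are illustrative assumptions only, not thresholds endorsed here.

```python
import pandas as pd

# Hypothetical interview-level data; all names and values are illustrative.
df = pd.DataFrame({
    "case_id": [1, 2, 3, 4, 5],
    "duration_min": [34.0, 9.0, 41.0, 88.0, 37.0],      # total interview length
    "pct_item_nonresponse": [0.02, 0.31, 0.04, 0.05, 0.03],
})

# Flag durations more than 3 median absolute deviations (MAD) from the
# median -- a robust alternative to mean/SD cutoffs for skewed timing data.
med = df["duration_min"].median()
mad = (df["duration_min"] - med).abs().median()
df["flag_duration"] = (df["duration_min"] - med).abs() > 3 * mad

# Flag extreme item nonresponse (the 20% cutoff is purely illustrative and
# would in practice be tuned per country, language and mode).
df["flag_nonresponse"] = df["pct_item_nonresponse"] > 0.20

df["needs_review"] = df["flag_duration"] | df["flag_nonresponse"]
print(df[df["needs_review"]])
```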

For survey practitioners, this raises a critical question: Which data-quality indicators are most valuable for identifying problems in the field -- and, by extension, low-quality interviewing? Are certain indicators better suited to identifying certain problems? And what thresholds distinguish a case worth a closer look from one requiring deeper investigation? More broadly, how do these issues play out across comparative data as well as between locations and modes?

Once potential problems are identified, determining the best course of action can be a challenge. Resolution can range from simple case deletion (with requisite re-weighting, as applicable), to deletion of all interviews conducted by a given interviewer or observed by a given supervisor, to complete re-fielding.
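As an illustration of the simplest end of that spectrum, the sketch below drops flagged cases and rescales the remaining weights so the weighted total is preserved. The column names are hypothetical, and a real adjustment would typically re-rake to population margins rather than apply a single scaling factor.

```python
import pandas as pd

# Hypothetical weighted sample; "needs_review" continues the flagging
# example above and all names are illustrative.
df = pd.DataFrame({
    "case_id": [1, 2, 3, 4, 5],
    "weight": [1.1, 0.8, 1.0, 1.3, 0.9],
    "needs_review": [False, True, False, True, False],
})

total_before = df["weight"].sum()

# Drop the problematic cases...
kept = df[~df["needs_review"]].copy()

# ...and rescale the surviving weights to preserve the weighted total.
kept["weight"] *= total_before / kept["weight"].sum()
print(kept)
```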

The goal of this session, then, is to bring together researchers to discuss the measures they use to assess data quality, the thresholds they apply and the actions they take to resolve problematic cases. Topics may include but are not limited to:

Assessing the validity of cases flagged as “low quality” across different indicators;
Setting thresholds for quality control – that is, what is “too short” or “too long” and how do you determine that across different countries, languages, and modes;
Research that tackles new and innovative ways to expose “curbstoning” and other practices that lead to low-quality data (see the percent-match sketch after this list);
Methods used to verify proper in-home selection;
Strategies used to detect respondent confusion, satisficing, and discomfort;
Research evaluating when and how to replace low-quality data, including issues of substitution and implications for data quality and final data representativeness.
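On the curbstoning point above, one widely discussed screen is a percent-match check for near-duplicate interviews. The sketch below is a minimal illustration: the response data are invented and the 0.85 cutoff is arbitrary.

```python
import itertools
import pandas as pd

# Hypothetical substantive responses: one row per interview, one column per
# item; all values are invented for illustration.
responses = pd.DataFrame({
    "q1": [1, 1, 2, 1], "q2": [3, 3, 1, 3], "q3": [2, 2, 2, 2],
    "q4": [5, 5, 4, 5], "q5": [1, 1, 3, 1],
}, index=["c1", "c2", "c3", "c4"])

# Percent match: the share of items on which two interviews agree. Pairs
# above the (arbitrary) 0.85 cutoff become candidates for falsification
# review; here c1, c2 and c4 are exact copies of one another.
suspect_pairs = []
for a, b in itertools.combinations(responses.index, 2):
    match = (responses.loc[a] == responses.loc[b]).mean()
    if match > 0.85:
        suspect_pairs.append((a, b, match))

print(suspect_pairs)
```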

We will limit this particular session to face-to-face and telephone interviewing, rather than online interviewing. We invite academic and non-academic researchers as well as survey practitioners to contribute.