Is Clean Data Better?
Convenor: Dr Frances Barlas
Affiliation: GfK Custom Research
Though many researchers have worked hard to improve survey response rates, a further challenge comes from respondents who do respond but provide low-quality answers. Many researchers have argued that, to improve data quality, cases that fail to meet a minimum quality standard should be excluded from analyses. This session covers papers that empirically evaluate data cleaning techniques, including indicators of sub-optimal response such as speeding, non-differentiation (straightlining) on grid questions, failing trap or red-herring questions, item non-completion, and extreme responding, as well as concerns about faked data and identity verification procedures. The aim of the session is to include studies that examine the impact of data cleaning on substantive results or on the extent of bias relative to national benchmarks. Such studies will help inform best-practice guidelines for data cleaning and data quality.
We encourage the submission of papers with a focus on:
• evaluating data cleaning techniques for impact on data quality
• the impact of data quality on survey accuracy and validity
• methods to identify poor survey responses and improve data quality
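As one illustration of the kinds of cleaning rules discussed above, the sketch below flags sub-optimal responses using simple indicators of speeding, straightlining, failed trap questions, and item non-completion. The data, thresholds, function names, and item names are all hypothetical; studies in this session would evaluate whether rules of this kind actually improve accuracy:

```python
def flag_response(resp, median_seconds, trap_item="trap", trap_correct=3):
    """Return quality flags for a single survey response.

    resp: dict with 'seconds' (completion time in seconds), 'grid'
    (a list of answers to one grid question), and answers keyed by
    item name; missing answers are None.
    """
    flags = []
    # Speeding: faster than half the median completion time
    # (an illustrative threshold, not an established standard).
    if resp["seconds"] < 0.5 * median_seconds:
        flags.append("speeding")
    # Non-differentiation (straightlining): identical answers across a grid.
    if len(set(resp["grid"])) == 1:
        flags.append("straightlining")
    # Trap / red-herring question answered incorrectly.
    if resp.get(trap_item) != trap_correct:
        flags.append("failed_trap")
    # Item non-completion: any missing answers.
    if any(v is None for v in resp.values()):
        flags.append("incomplete")
    return flags

# Hypothetical responses: one careful, one low-quality.
responses = [
    {"seconds": 300, "grid": [1, 2, 4, 3], "trap": 3, "q1": 5},
    {"seconds": 90,  "grid": [3, 3, 3, 3], "trap": 1, "q1": None},
]
median_seconds = 300
for r in responses:
    print(flag_response(r, median_seconds))
```

In practice, the substantive question this session addresses is what excluding the flagged cases does to estimates, which such a rule-based filter by itself cannot answer.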