
ESRA 2023 Program

All time references are in CEST

Reducing and measuring nonresponse bias in times of crisis: Challenges, opportunities, and new directions 1

Session Organiser: Mr Tom Krenzke (Westat)
Time: Wednesday 19 July, 09:00 - 10:30
Room: U6-01e

With response rates for sample surveys continuing to decline, much attention has focused on reducing and measuring nonresponse bias. The COVID-19 pandemic, geo-political conflicts, and humanitarian crises have made it difficult to reverse the decline in response rates. Methods exist for reducing nonresponse bias both during and after data collection, including incentives for respondents and interviewers, hiring practices, training, outreach materials, and fieldwork strategies such as reassigning cases, tracking progress, and communication. Other methods help measure nonresponse bias in order to gauge the quality of the collected data: comparing demographic distributions before and after nonresponse adjustments, comparing respondent distributions to frame distributions, computing correlations between weighting variables and outcome variables, and conducting level-of-effort analyses. Presentations will describe emerging approaches to improve response rates and to reduce and measure nonresponse bias. The session covers a variety of surveys (e.g., the Programme for the International Assessment of Adult Competencies) from different countries and sectors (government, university).
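As a rough illustration of two of the diagnostics listed above, the following minimal Python sketch compares respondent distributions to frame distributions and correlates a weighting variable with an outcome. It assumes a pandas DataFrame `frame` with a 0/1 `responded` indicator, a frame variable observed for all sampled cases, and a weight and outcome observed for respondents only; all column names here are hypothetical, not from any specific survey.

```python
import pandas as pd

def respondent_vs_frame_distribution(frame: pd.DataFrame, var: str) -> pd.DataFrame:
    """Compare the distribution of a frame variable among respondents
    with its distribution on the full sampling frame."""
    full = frame[var].value_counts(normalize=True)
    resp = frame.loc[frame["responded"] == 1, var].value_counts(normalize=True)
    return pd.DataFrame({"frame_share": full, "respondent_share": resp}).fillna(0)

def weight_outcome_correlation(frame: pd.DataFrame, weight: str, outcome: str) -> float:
    """Correlation between a weighting variable and an outcome among
    respondents; a large absolute value suggests the weighting adjustment
    may materially shift the estimate."""
    resp = frame[frame["responded"] == 1]
    return resp[weight].corr(resp[outcome])
```

Large gaps between frame and respondent shares, or strong weight-outcome correlations, are the kind of signals the methods above use to flag a risk of nonresponse bias.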

Keywords: Total survey error, analysis, adaptive survey design

Papers

As mother like daughter? Non-response in Home Questionnaires Assessed through Student Responses

Mr Rune Müller Kristensen (Aarhus University) - Presenting Author

Non-response rates are a problematic indicator of non-response bias in surveys, yet they are still often used to legitimize survey quality. Studies from recent decades have shown that quality judgements should instead be based on the covariance between the propensity to be a non-respondent and the survey variables of interest. As this information is seldom available, due to the non-response phenomenon itself, studies that take it into account often argue for survey quality by reference to other studies. Knowledge about the generalizability of such measures across levels of non-response rates and social contexts is, however, limited, as international studies seldom have information about the non-respondents.
One setting in which non-response bias is well documented is the Home Questionnaires (HQs) included in International Large-Scale Assessments (ILSAs). Although strict standards are set to ensure high data quality in these studies, HQs are not covered by the quality assurance, as they are not a mandatory part of study participation for countries. ILSA studies nevertheless provide some information on non-respondents, as the child's ability score and answers to context questionnaire (CQ) items are normally available.
To gather information on the extent to which generalizations about the severity of non-response can be made from one study to another, the presentation exploits the last three waves of the TIMSS study to empirically scrutinize 1) the variation in non-response rates in HQs across countries and cycles of the study, 2) the correlation between non-response and different survey variables measured in the CQ, 3) whether variation in bias appears related to the content of CQ survey items, and 4) whether these patterns appear to be country- and/or cycle-specific.
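A minimal sketch of the second step, correlating HQ non-response with student-level variables within country-by-cycle cells, might look as follows. This is an illustration only, not the author's code; the DataFrame `timss` and the column names (`hq_nonresponse`, `country`, `cycle`, `ability_score`) are assumptions.

```python
import pandas as pd

def nonresponse_correlations(df: pd.DataFrame, variables: list[str]) -> pd.DataFrame:
    """Point-biserial correlation of the 0/1 HQ non-response indicator
    with each student-level variable, computed within country x cycle cells."""
    def cell_corrs(g: pd.DataFrame) -> pd.Series:
        return pd.Series({v: g["hq_nonresponse"].corr(g[v]) for v in variables})
    return df.groupby(["country", "cycle"]).apply(cell_corrs)

# Example call with hypothetical CQ items:
# nonresponse_correlations(timss, ["ability_score", "books_at_home"])
```

Comparing these correlations across countries and cycles is one direct way to probe how far bias patterns generalize.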


The risk of nonresponse bias in online and hybrid surveys

Ms Blanka Szeitl (Centre for Social Sciences, Institute for Sociology) - Presenting Author
Ms Vera Messing (Centre for Social Sciences, Institute for Sociology)
Mr Bence Ságvári (Centre for Social Sciences, Institute for Sociology)

As online and hybrid data collections become more prevalent, there is a growing need to analyse methodological dimensions such as the risk of nonresponse bias. The study applies classical methods for estimating the risk of nonresponse bias to online and hybrid surveys using Hungarian data. One aim of the study is to compare online and hybrid surveys on the risk of nonresponse bias using several methods: at the survey level, the R-indicator, a nonresponse model, and the variation of subgroup response rates; at the variable level, the Average Absolute Relative Bias (AARB) and the Fraction of Missing Information (FMI). Another aim is to use the results of the risk-of-nonresponse-bias analysis in the design of subsequent online or hybrid surveys through specific post-stratification procedures. Several sources of benchmark information are used: administrative data, survey data (European Social Survey), and simulated data. The main finding is that the pattern of the risk of nonresponse bias differs between online and hybrid surveys. Among the post-stratification procedures, the most effective was the one that included attitude variables, such as general trust, and auxiliary data, such as the development classification of the place of residence. The analysis also draws attention to the difficulty of verifying the methodological criteria for online and hybrid surveys, which limits their reliability.
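For concreteness, one of the survey-level measures named above, the R-indicator, can be sketched as follows: fit a response-propensity model on auxiliary variables observed for the full sample, then compute R = 1 - 2 * SD of the estimated propensities, so that values near 1 indicate representative response and values toward 0 signal a higher risk of nonresponse bias. This is a simplified, unweighted version that omits the design weights and bias adjustment of the published estimator; the logistic model and inputs are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def r_indicator(X: np.ndarray, r: np.ndarray) -> float:
    """Simplified R-indicator: X holds auxiliary variables for the full
    sample, r is the 0/1 response indicator. R = 1 - 2 * sd(propensities)."""
    rho = LogisticRegression(max_iter=1000).fit(X, r).predict_proba(X)[:, 1]
    return 1.0 - 2.0 * rho.std(ddof=1)
```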


The Geography of Nonresponse: Can spatial econometric techniques improve survey weights for nonresponse?

Dr Christoph Zangger (University of Bern) - Presenting Author

Different strategies address unit nonresponse in cross-sectional and longitudinal surveys, with calibration and inverse probability weighting among the most common approaches (Särndal and Lundström, 2005). Moreover, it has been recognized that nonresponse varies geographically (Hansen et al., 2007). The geographic clustering of survey nonresponse has helped to identify segments of the population that are less likely to participate (Bates and Mulry, 2011; Erdman and Bates, 2017). As a consequence, researchers have included geographically aggregated measures to account for nonresponse and to construct survey weights (Kreuter et al., 2010). This paper extends this literature by building on the argument that people with similar characteristics tend to live in similar places. The resulting segregation induces spatial correlation among characteristics, such as education, that are used to predict survey nonresponse and that are likely correlated with other survey measures. Consequently, the residuals from regressing survey response on a set of available characteristics are themselves spatially correlated, biasing estimates and predictions (Pace and LeSage, 2010).

While aggregated characteristics can pick up some of the spatial correlation, there is another, more direct approach: spatial econometric models (LeSage and Pace, 2009). These models can directly incorporate other units' response status in the prediction of an individual unit's response propensity, thereby accounting for the socio-spatial interdependence induced by unobserved residential selection. Using Monte Carlo simulations, this paper demonstrates how spatial econometric models improve predicted response propensities, yield more accurate survey weights, and are thus a valid alternative to common inverse-probability weighting approaches, even when the data-generating process is incorrectly specified. The results are robust across a wide variety of model specifications, including the underlying response pattern and its spatial correlates.
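To convey the core idea, the sketch below uses a deliberately simple stand-in rather than the paper's actual spatial econometric estimator: a full spatial-lag probit requires specialized estimation, so here the spatially lagged response status (the average response of the k nearest neighbours, a row-normalized W @ r) is merely added as a covariate in an ordinary propensity model. The inputs `coords`, `X`, and `r` are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def spatially_lagged_propensities(coords: np.ndarray, X: np.ndarray,
                                  r: np.ndarray, k: int = 10) -> np.ndarray:
    """Predict response propensities using neighbours' response status.
    coords: unit locations; X: auxiliary variables; r: 0/1 response indicator."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nn.kneighbors(coords)
    w_r = r[idx[:, 1:]].mean(axis=1)          # spatial lag of response, excluding self
    X_aug = np.column_stack([X, w_r])
    model = LogisticRegression(max_iter=1000).fit(X_aug, r)
    return model.predict_proba(X_aug)[:, 1]   # propensities, invertible into weights
```

The inverse of these propensities would then serve as nonresponse weights, the quantity the paper's simulations evaluate.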


Prediction-based Methods for Assessing Nonresponse Bias and the Effectiveness of Nonresponse Adjustments in an International Survey

Mr Benjamin Schneider (Westat) - Presenting Author
Mr Tom Krenzke (Westat)
Dr Laura Gamble (Westat)

Our ability to assess and reduce nonresponse bias depends crucially on auxiliary variables that provide information about respondents, nonrespondents, and their differences. In any given survey, several auxiliary variables may be available for a nonresponse analysis or adjustment, which poses analytical challenges: some may be redundant with one another, while others may give different, conflicting indications of nonresponse bias. To synthesize information from several auxiliary variables, we develop a predictive modeling approach that imputes survey outcomes and uses the difference in model predictions between respondents and nonrespondents as a summary of the nonresponse bias apparent from the set of available auxiliary variables. We show how this approach can be used to evaluate nonresponse bias prior to nonresponse adjustments and to identify apparent bias remaining after nonresponse adjustments. Because the auxiliary variables provide only incomplete information about nonrespondents and potential bias, we use diagnostic tools from predictive modeling to develop an intuitive statistic summarizing uncertainty about nonrespondents' outcomes and potential bias after conducting nonresponse adjustments. We demonstrate how these methods were used in nonresponse bias analyses conducted for the Programme for the International Assessment of Adult Competencies.
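The central idea can be sketched in a few lines; this is an illustration of the general technique, not the authors' implementation, and the model choice and variable names are assumptions. Fit an outcome model on respondents using auxiliary variables observed for the full sample, predict the outcome for both groups, and summarize apparent nonresponse bias as the gap between mean predictions for nonrespondents and respondents.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def predicted_bias(X: np.ndarray, y_resp: np.ndarray,
                   responded: np.ndarray) -> float:
    """X: auxiliary variables for the full sample; y_resp: outcomes observed
    for respondents; responded: 0/1 response indicator aligned with X."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[responded == 1], y_resp)
    preds = model.predict(X)
    # Apparent bias: how different nonrespondents look, as seen through
    # the auxiliary variables only.
    return preds[responded == 0].mean() - preds[responded == 1].mean()
```

Running the same comparison on nonresponse-adjusted predictions would flag bias that the adjustments fail to remove, which is the diagnostic role the abstract describes.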