



Thursday 20th July, 14:00 - 15:30 Room: F2 105


Electoral research & polling 1

Chair: Dr Andreas Goldberg (University of Amsterdam)


Paper Details

1. Who gets lost, and what difference it makes? Mixed modes, survey participation and nonresponse bias
Dr Andreas Goldberg (University of Amsterdam)
Professor Pascal Sciarini (University of Geneva)

While the bulk of the literature on turnout bias in post-election surveys has focused on vote overreporting, our recent work shows that voter overrepresentation among survey respondents (nonresponse bias) contributes more than measurement error (overreporting) to the overestimation of turnout (Sciarini and Goldberg, 2016 and forthcoming). Nonresponse bias thus deserves closer attention. In the present paper, we contribute to the stream of survey research examining the influence of mode effects on nonresponse bias (Voogt and Saris 2005; Atkeson et al. 2014).
Empirically, we rely on a unique data set of validated votes collected in the context of two post-election surveys in the canton of Geneva (in 2012 and 2015). The data set offers information on official turnout (and basic socio-demographic variables) both for sampled citizens who participated in the survey and for those who did not. Both surveys used the same sampling frame (drawn from the official vote register) and both relied on a mixed-mode design (telephone and internet) with very similar response rates (about 45%). In addition, both surveys included a short written questionnaire among non-respondents, again with very similar response rates (about 35%). However, the surveys differ with respect to the relative weight of the telephone and online modes: the first was predominantly a telephone survey, complemented with online interviews, whereas the second was primarily an internet survey, complemented with telephone interviews.
We first address the question "who gets lost" by comparing the socio-demographic characteristics of the initial sample with those of the two sub-samples of survey respondents and respondents to the written questionnaire. For the two types of respondents, we further compare the impact of political attitudes on survey participation. We then turn to the analysis of "what difference it makes", i.e. to the analysis of nonresponse bias (voter overrepresentation). Again using the initial sample as a baseline, we analyze the size and the determinants of voter overrepresentation among survey respondents and respondents to the written questionnaire.
Overall, our results highlight the added value of the written questionnaire among non-respondents, which in both surveys increases the representativeness of the realized sample, reduces the nonresponse bias, and enhances the accuracy of turnout determinants.
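To make the decomposition concrete, the following Python fragment is a minimal sketch (not the authors' code; the field names 'validated', 'responded' and 'reported' are assumed for illustration) of how validated-vote data allow the overestimation of turnout to be split into a nonresponse component (voter overrepresentation among respondents) and a measurement component (overreporting among respondents):

def turnout_bias_decomposition(sample):
    """sample: one dict per sampled citizen, with keys
    'validated' (0/1 turnout from the official register),
    'responded' (0/1 survey participation), and
    'reported'  (0/1 self-reported turnout, respondents only)."""
    official = sum(p['validated'] for p in sample) / len(sample)
    respondents = [p for p in sample if p['responded']]
    validated_resp = sum(p['validated'] for p in respondents) / len(respondents)
    reported_resp = sum(p['reported'] for p in respondents) / len(respondents)
    return {
        # voter overrepresentation among respondents (nonresponse bias)
        'nonresponse_bias': validated_resp - official,
        # overreporting among respondents (measurement error)
        'overreporting_bias': reported_resp - validated_resp,
        # total overestimation of turnout in the realized sample
        'total_overestimation': reported_resp - official,
    }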

References
Atkeson, Lonna Rae, Adams, Alex N. & R. Michael Alvarez (2014). Nonresponse and Mode Effects in Self- and Interviewer-Administered Surveys. Political Analysis 22(3): 304-320.
Sciarini, Pascal and Andreas C. Goldberg (2016). “Turnout bias in postelection surveys: Political involvement, survey participation and vote overreporting.” Journal of Survey Statistics and Methodology 4(1): 110-137.
Sciarini, Pascal and Andreas C. Goldberg (forthcoming). “Lost on the way. Nonresponse and its influence on turnout bias in post-election surveys.” International Journal of Public Opinion Research.
Voogt, Robert J. J. and Willem E. Saris (2005). Mixed mode designs: Finding the balance between nonresponse bias and mode effects. Journal of Official Statistics 21(3): 367-387.


2. Analysing electoral non-response bias in electoral contexts
Mr Yamil Nares (DEFOE)
Mr René Bautista (DEFOE)
Mr Daniel Gonzalez (DEFOE)

This paper explores potential non-response effects in an electoral context. In particular, we explore and contrast differences between survey estimates and actual results at the precinct level (also referred to as “electoral section”). As is well known, nonresponse error can lead to nonresponse bias and have profound effects on estimates of electoral results. If not well understood, nonresponse bias can lead to erroneous survey estimates and create wrong expectations among the general public. This paper is based on two studies: (1) a weekly nationwide household panel survey (with 13 waves of data collection) and an Election Day survey conducted during the 2012 presidential election in Mexico. For the weekly panel, demographic information on nonrespondents was collected by observation alone from wave 8 to wave 13. The exit poll used the same electoral sections as the weekly panel in the first stage of the sampling process, and it also collected nonresponse data by observation (age and gender). (2) A five-wave household panel conducted during the 2015 congressional election, for which nonresponse data were collected in all waves. These studies were conducted in Mexico by DEFOE, an independent survey firm based in Mexico City. In the results, we show demographic profiles (rolled up at the electoral-section level) comparing respondents and nonrespondents, along with a comparison of survey estimates and actual results. Additionally, we study potential geographical differences to explore whether nonresponse error follows the same pattern across the country. These results could help inform decisions to better account for potential nonresponse bias in election surveys.
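As an illustration only (not the authors' code), the sketch below shows, under assumed field names ('section', 'responded', 'age', 'vote_intention'), how observed nonrespondent information and official precinct results can be combined to contrast respondents with nonrespondents and survey estimates with actual outcomes at the electoral-section level:

from collections import defaultdict

def section_level_comparison(contacts, official_results):
    """contacts: one dict per sampled person, with 'section', 'responded' (0/1),
    observed 'age', and, for respondents, a 'vote_intention' string.
    official_results: dict mapping section -> {candidate: official vote share}."""
    by_section = defaultdict(list)
    for person in contacts:
        by_section[person['section']].append(person)
    report = {}
    for section, people in by_section.items():
        resp = [p for p in people if p['responded']]
        nonresp = [p for p in people if not p['responded']]
        if not resp:
            continue  # no survey estimate possible for this section
        shares = defaultdict(float)
        for p in resp:
            shares[p['vote_intention']] += 1 / len(resp)
        report[section] = {
            # survey estimate minus official result, per candidate
            'estimate_vs_actual': {c: shares[c] - s
                                   for c, s in official_results[section].items()},
            # simple demographic contrast between respondents and nonrespondents
            'mean_age_respondents': sum(p['age'] for p in resp) / len(resp),
            'mean_age_nonrespondents': (sum(p['age'] for p in nonresp) / len(nonresp)
                                        if nonresp else None),
        }
    return report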


3. An Alternative Approach to Election Polling in the United States: The USC Dornsife / Los Angeles Times 2016 “Daybreak” Poll
Ms Jill Darling (University of Southern California Center for Economic and Social Research)
Dr Arie Kapteyn (University of Southern California Center for Economic and Social Research)
Ms Tania Gutsche (University of Southern California Center for Economic and Social Research)

This paper reports on the use of an innovative method of pre-election polling. We used probabilistic methods with an internet tracking poll to forecast the vote in the 2016 United States presidential election between major-party candidates Donald Trump and Hillary Clinton. Probabilistic polling (Delavande & Manski, 2010) provides an alternative to traditional polling methods: respondents are asked to state a percentage likelihood of voting for each presidential candidate, as well as their likelihood of voting at all. From July 4 to November 7, respondents who were members of the USC Center for Economic and Social Research Understanding America Study (UAS, a probability-based internet panel) answered the vote questions on an assigned day once per week. Vote forecasts were estimated by calculating the ratio of vote percentage to turnout percentage for each candidate, and results were presented in the form of online charts updated nightly as 7-day rolling averages. While the poll's estimate was several points off the final popular vote count, it was one of the few polls to forecast a Trump win, and it detected a wave of support that many traditional polls missed. We fielded a post-election follow-up starting November 9th. This work is part of an ongoing exploration of the utility of methods that may help address problems facing the field of election polling, so we made our data and methods available to other researchers for analysis. While these methods successfully predicted the 2012 outcome (Gutsche, Kapteyn, Meijer, & Weerman, 2014; Kapteyn, Meijer, & Weerman, 2012), they were not as accurate in the volatile 2016 election. This presentation focuses on the poll's estimates and models and on what we learned about predicted vs. actual voting from the post-election survey. We will examine voting among key demographics and consider the impact of alternative methods of modeling and forecasting using our data. We will also touch on lessons learned, potential contributions to the field of election polling, and next steps.
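The forecast calculation described above can be sketched as follows; this is a simplified illustration, not the poll's production code, so the variable names are hypothetical and the unweighted averages ignore the sample weights and adjustments applied in the actual UAS estimates:

from collections import deque

def daily_forecast(responses):
    """responses: list of dicts with self-reported percentages (0-100):
    'p_turnout', 'p_trump', 'p_clinton' for one day's interviews."""
    n = len(responses)
    turnout = sum(r['p_turnout'] for r in responses) / n
    trump = sum(r['p_trump'] for r in responses) / n
    clinton = sum(r['p_clinton'] for r in responses) / n
    # Forecast share per candidate: ratio of expected vote percentage
    # to expected turnout percentage, as described in the abstract.
    return {'trump': 100 * trump / turnout,
            'clinton': 100 * clinton / turnout}

def rolling_7day(daily_estimates):
    """daily_estimates: chronological list of daily_forecast() outputs.
    Returns 7-day rolling averages, mirroring the nightly chart updates."""
    window = deque(maxlen=7)
    out = []
    for est in daily_estimates:
        window.append(est)
        out.append({c: sum(d[c] for d in window) / len(window)
                    for c in ('trump', 'clinton')})
    return out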