Thursday 20th July, 16:00 - 17:30 Room: F2 105


Electoral research & polling 2

Chair: Professor Michael Schober (New School for Social Research)


Paper Details

1. Lies, Damn Lies, and Exit Polls: Minority Sub-samples and the Dangers of Design
Professor Gary Segura (UCLA)

The National Exit Poll, conducted by Edison Research for the major television networks in the United States, has a long and troubled history. Nevertheless, its monopoly, accompanied by the desire to tell the story of the election, confers upon these polls a status of "truth" that goes on to entirely structure post-election narratives and, by extension, shape how political parties respond to the election, including internal structural decisions and the policy agendas of the new administration.

Exit poll results from the November 2016 US election offered dubious estimates of the minority presidential vote. Specifically, despite Trump's nationalist rhetoric and racialized campaign, Latinos, Asian Americans, and African Americans were reported to support him at levels higher than they supported previous GOP nominees, and at levels wildly inconsistent with pre-election polling. This was particularly true of Latinos, who were targeted by the Trump campaign from the first moments of the election. Exit polls suggested that Trump outperformed Romney (the GOP nominee in 2012), collecting 29% of the Latino vote. These claims are difficult to sustain in light of pre-election polling and longer-term trends in the Latino two-party vote.

These results, we suggest, are wildly off the mark. In this effort, we compare exit poll estimates with the Latino Decisions Election Eve Survey. The LD Survey used a mixed-mode interviewing process and sampled in an unclustered manner from the voter file to produce two-party vote estimates among extreme-high-propensity voters. Those estimates differ significantly from the exit polls and show a much poorer performance by Trump among Latinos: only 18%.
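
To give a sense of the scale of the 29% versus 18% discrepancy, a simple two-proportion comparison can be sketched as below. This is purely illustrative: the abstract reports only the point estimates, so the sample sizes here are assumptions, not figures from either survey.

```python
import math

# Illustrative two-proportion z-test comparing the exit poll (29%) and
# Latino Decisions (18%) estimates of Trump's Latino support.
# Sample sizes are ASSUMED; the abstract reports only point estimates.
p_exit, n_exit = 0.29, 2000   # exit poll Latino sub-sample (assumed n)
p_ld, n_ld = 0.18, 5600       # LD Election Eve Survey (assumed n)

# Pooled proportion under the null hypothesis of equal support
pooled = (p_exit * n_exit + p_ld * n_ld) / (n_exit + n_ld)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_exit + 1 / n_ld))
z = (p_exit - p_ld) / se
print(f"z = {z:.1f}")  # ~10 at these assumed ns: far beyond sampling noise
```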

The sample properties of the exit polls will be shown to produce demographic skews that vary greatly from CPS estimates, and always in the same direction over time. That is, the exit polls are structurally ill-suited to sub-group estimates because their sampling method is more likely to yield minority respondents with atypically high income and education compared to the Census. The paper will include historical comparisons between past exit poll data and the Current Population Survey. The 2016 results will be compared in aggregate to the CPS November 2016 supplement to demonstrate that this bias continued into the most recent election cycle. The result, I argue, was a substantial misunderstanding of the preferences of the Latino electorate in this cycle. Ecological inference analysis of actual precinct-level vote will be used to adjudicate between the exit polls and the LD surveys, and will demonstrate the significant bias introduced by the exit poll sampling design.
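
The abstract does not specify which ecological inference estimator is used, so the sketch below stands in with the simplest variant, Goodman's ecological regression, on simulated precinct data; the data and support levels are hypothetical.

```python
import numpy as np

# Goodman's ecological regression: regress precinct-level Trump vote
# share on precinct Latino share. The intercept estimates non-Latino
# support; intercept + slope estimates Latino support. Data here are
# SIMULATED; a real analysis would merge precinct returns with
# Census or voter-file demographics.
rng = np.random.default_rng(0)
n_precincts = 500
latino_share = rng.uniform(0, 1, n_precincts)

true_latino, true_other = 0.18, 0.55  # hypothetical support levels
trump_share = (latino_share * true_latino
               + (1 - latino_share) * true_other
               + rng.normal(0, 0.03, n_precincts))

# OLS fit: trump_share = b0 + b1 * latino_share
X = np.column_stack([np.ones(n_precincts), latino_share])
b0, b1 = np.linalg.lstsq(X, trump_share, rcond=None)[0]

print(f"Estimated Latino support:     {b0 + b1:.2f}")  # ~0.18
print(f"Estimated non-Latino support: {b0:.2f}")       # ~0.55
```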


2. Communicating Uncertainty in Data Visualizations
Professor Michael Schober (New School for Social Research)
Professor Aaron Hill (Parsons School of Design)

Popular and media discussions about seeming failures of election polling results, as well as commissions investigating what went wrong, have highlighted the difficulty of communicating skepticism and questions about data: margins of error, the potential consequences of nonresponse bias and other issues of representation, generalizability from nonprobability samples and data science methods, and a host of other kinds of uncertainty inherent in scientific survey data collection. The evidence shows that consumers of data can have an inaccurate understanding of visual representations of uncertainty (e.g., error bars in bar graphs, multivariate confidence intervals) and that textual qualifiers can easily be ignored or misinterpreted. We argue that many of the most frequently used visualizations can, often inadvertently, make findings look more certain than they actually are, which can allow public and media diffusion of oversimplified interpretations. We also argue that more kinds of potential measurement error need to be communicated than are currently visualized.
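
As a hedged illustration of this argument (not the authors' own materials), the sketch below contrasts a conventional bar chart with error bars against a view that plots each estimate's implied sampling distribution, so the overlap between two hypothetical candidates is harder to overlook.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two HYPOTHETICAL poll estimates whose 95% confidence intervals overlap.
candidates = ["A", "B"]
estimates = np.array([0.48, 0.52])
moe = np.array([0.03, 0.03])  # margins of error (95% CI half-widths)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Conventional view: bars with error bars. The bar heights dominate
# visually, suggesting a clearer lead than the data support.
ax1.bar(candidates, estimates, yerr=moe, capsize=6)
ax1.set_ylim(0.40, 0.60)
ax1.set_title("Bars + error bars")

# Alternative view: plot each estimate's implied sampling distribution
# (unnormalized Gaussian), so the overlap between the two estimates,
# and thus the uncertainty, is visually primary.
se = moe / 1.96  # convert 95% half-widths back to standard errors
x = np.linspace(0.40, 0.60, 400)
for est, s, label in zip(estimates, se, candidates):
    ax2.plot(x, np.exp(-0.5 * ((x - est) / s) ** 2), label=f"Candidate {label}")
ax2.legend()
ax2.set_title("Overlapping sampling distributions")

plt.tight_layout()
plt.show()
```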

In this paper we present a range of possible kinds of uncertainty that deserve new visualization treatments, from the most common kinds of statistical error, to the quantifiable aspects of Total Survey Error (alternate possible interpretations of words in survey questions across a population, potential coverage error, etc.), to the replicability of scientific results, to the increasing use of data science and predictive models to interpret and communicate data. We also outline a set of critical considerations about the potential audiences for a visualization, the range of possible communicative and rhetorical goals a designer might have (from visually and accurately representing uncertainty to creating new interactive visualizations that entice users to explore legitimate alternative interpretations of the data), ways of assessing the success of a visualization, and the potentially unintended consequences of visualizations gone wrong. To support our proposals, we present examples of attempts to create new visualizations of uncertainty in a range of data sets (survey data, psychology laboratory studies, administrative records data) from an interdisciplinary Fall 2016 graduate course at The New School.