
ESRA 2019 Programme at a Glance


Assessing the Quality of Survey Data 3

Session Organiser: Professor Jörg Blasius (University of Bonn)
Time: Wednesday 17th July, 09:00 - 10:30
Room: D02

This session will provide a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many different kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, and all of them may have a strong influence on the “substantive” solutions. Sources of methodologically-induced variation include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit non-response, as well as faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Keywords: Quality of data, task simplification, response styles, satisficing

The Demise of Response Rates? Theoretical Versus Empirical Indicators of Sample Quality

Professor Randall Thomas (Ipsos) - Presenting Author
Dr Frances Barlas (Ipsos)

Early on, survey researchers recognized that randomly selected samples could represent populations with relatively high fidelity. Further, higher response rates were believed to reflect samples that were more representative of the intended population and to indicate that the resulting data were of higher quality. Therefore, for randomly selected samples, the ‘response rate’ became shorthand for a sample’s quality. However, regardless of survey mode, response rates for probability-based surveys have declined significantly over the last twenty years. As a result, many authors have pointed out that a survey’s response rate may have little bearing on the study’s data validity, especially if non-response is random. With the combination of significantly lower response rates for probability-based samples and the dramatic rise of non-probability opt-in samples for which response rates are not calculable, we face a juncture where new, easily calculable signals of ‘sample quality’ are needed. In this paper we focus on developing an empirically based quantitative representation of data quality, whereby every sample can be evaluated against high-quality benchmarks and the resulting values combined to provide a sample quality metric. We review a series of four large-scale studies employing both probability and non-probability samples to evaluate the average divergence between sample results and a wide variety of national benchmarks. We discuss benchmarks that could usefully form a standard set of items for a sample quality index independent of sampling or weighting factors. Such an index can be useful in quantifying sample quality from sample selection through survey participation to post-survey adjustment. We further discuss how this empirical indicator can provide information on the relative utility of different samples when they are combined or used in mixed-mode designs to provide survey estimates, which is occurring with increasing frequency.
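As a rough illustration of the kind of benchmark-divergence index described above, the sketch below averages the absolute gaps between sample estimates and external benchmarks. The items, values, and scaling are hypothetical placeholders, not the authors' actual metric.

```python
# Illustrative sketch only: a benchmark-divergence quality index of the
# general kind described above. Items and values are hypothetical, not
# figures from the studies in this paper.

# External (e.g. national) benchmark proportions for a standard set of items
benchmarks = {
    "smokes_daily": 0.14,
    "has_drivers_licence": 0.86,
    "voted_last_election": 0.61,
    "owns_home": 0.64,
}

# Estimates of the same quantities from one survey sample
sample_estimates = {
    "smokes_daily": 0.11,
    "has_drivers_licence": 0.90,
    "voted_last_election": 0.72,
    "owns_home": 0.60,
}

def average_absolute_divergence(estimates, benchmarks):
    """Mean absolute gap (in percentage points) between sample estimates and
    benchmarks; lower values indicate a sample closer to the benchmarks."""
    gaps = [abs(estimates[item] - benchmarks[item]) for item in benchmarks]
    return 100 * sum(gaps) / len(gaps)

print(f"Average absolute divergence: "
      f"{average_absolute_divergence(sample_estimates, benchmarks):.1f} points")
```

The same index can be recomputed before and after weighting, which is one way such a measure could track sample quality from selection through post-survey adjustment.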


Improving the Quality of Survey Data Collection – Measuring Social Desirability on the NatCen Panel

Ms Marta Mezzanzanica (National Centre for Social Research) - Presenting Author
Miss Ruxandra Comanaru (NatCen Social Research)
Mr Curtis Jessop (NatCen Social Research)

Survey findings are based on the self-reported attitudes and behaviours of the people taking part. If questions relate to perceived social norms, reported attitudes and behaviours may be biased in the direction of the social norm, though this may vary by context and sub-group (Jann, 2015). A challenge for survey researchers is to minimise, and account for, the bias that socially desirable responding introduces into survey results.
The purpose of the present study is to explore whether, and how, a measure of the extent to which an individual’s answers are likely to be affected by social norms can be used in general population surveys to improve the quality of survey data.
The study tests a twenty-item social desirability scale on the NatCen Panel, a probability-based sequential mixed-mode panel in Great Britain. The scale, based on Paulhus’ (1994) Balanced Inventory of Desirable Responding (BIDR v6), was refined for use in survey research. This involved the use of a five-point Likert scale with all values labelled (to minimise differences across modes), the use of a likelihood rather than an agreement scale (to determine the probability of adopting a socially desirable behaviour), re-wording of outdated items (to ensure clarity) and randomisation (to minimise order effects).
The study looks at how the estimates of susceptibility to social desirability bias from our proposed scale compare with those in the literature, and the extent to which it could be shortened to make it more appropriate for use in survey research. It also looks at the extent to which answers to the social desirability scale are associated with answers to a bank of twelve attitudinal questions, to explore whether the scale may be useful for identifying questions at risk of social desirability bias and measuring its effect on answer patterns. Further insight is gained through the analysis of response latency.
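One simple way to operationalise the link between the scale and the attitudinal bank is to flag items whose answers co-vary with respondents' social desirability scores. The sketch below illustrates that idea; the data, item names, and correlation threshold are hypothetical and do not come from the NatCen study.

```python
# Illustrative sketch only: flagging attitudinal items whose answers co-vary
# with a respondent-level social desirability score. The data, item names,
# and threshold are hypothetical and do not come from the NatCen study.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical social desirability scores (e.g. the mean of a 20-item,
# five-point likelihood scale).
sd_score = rng.normal(3.0, 0.6, n)

# Hypothetical bank of attitudinal items (five-point scales); item_02 is
# constructed to co-vary with the social desirability score.
attitudes = {
    "item_01": rng.integers(1, 6, n).astype(float),
    "item_02": np.clip(np.round(3.0 + 0.8 * (sd_score - 3.0)
                                + rng.normal(0.0, 1.0, n)), 1, 5),
    "item_03": rng.integers(1, 6, n).astype(float),
}

# Items strongly correlated with the social desirability score are flagged
# as potentially at risk of social desirability bias.
for name, values in attitudes.items():
    r = np.corrcoef(sd_score, values)[0, 1]
    flag = "<- potentially at risk" if abs(r) > 0.2 else ""
    print(f"{name}: r = {r:+.2f} {flag}")
```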


The Problem of High-Frequency Counts in Official Statistics – Trading Bias Against Volatility in Reporting

Professor Brian Francis (Lancaster University) - Presenting Author

This talk asserts that bias in the reporting of official statistics is often taken less seriously than precision – that the reduction of year-to-year volatility is prioritised over any concerns about bias.
We take as an example the Crime Survey for England and Wales, although the issue affects many other government surveys. The survey measures crimes, or victimisations, through a set of victim forms. A victim form can either record a one-off crime or a series victimisation, i.e. a repeated victimisation of the same type and severity by the same perpetrator. In the latter case, participants are asked to answer the question: how many times did this happen to you? Thus, a response of 26 for a series domestic violence offence would indicate that 26 different crimes took place, once every two weeks – not uncommon in domestic violence.
Counting all repeated crimes, however, introduces volatility into the year-to-year series, and so statistical organisations have capped high-frequency counts at some low value. The UK Office for National Statistics currently uses a cap of five victimisations over a year (soon to change to 12 for domestic violence), so a figure of 26 or 52 would be capped at 5. This method, a form of winsorisation, increases precision but introduces considerable undercount bias.
Precision is prioritised for political expediency, as it is hard to explain big changes in crime from one year to the next; however, the absolute estimate of the number of crimes is heavily biased, with a large number of crimes ignored. Within the Crime Survey, no estimate of the size of this bias is provided, even though the quality principle of the UK’s statistical Code of Practice requires it.
This talk estimates the degree of bias such capping introduces and discusses possible solutions which reduce bias while keeping precision at an acceptable level.
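The undercount mechanism can be illustrated with a few lines of arithmetic. In the sketch below the reported counts are hypothetical, not CSEW data; only the cap of 5 follows the figure quoted in the abstract.

```python
# Minimal numerical sketch of the undercount introduced by capping series
# victimisation counts. The reported counts below are hypothetical, not
# CSEW data; the cap of 5 follows the figure quoted in the abstract.

reported_counts = [1, 1, 2, 3, 5, 5, 8, 12, 26, 52]  # crimes per victim form

def capped_total(counts, cap):
    """Total number of crimes after winsorising each count at `cap`."""
    return sum(min(count, cap) for count in counts)

uncapped = sum(reported_counts)         # counts every reported repeat
capped = capped_total(reported_counts, 5)

undercount = uncapped - capped
print(f"Uncapped total: {uncapped}")
print(f"Capped at 5:    {capped}")
print(f"Undercount:     {undercount} crimes "
      f"({100 * undercount / uncapped:.0f}% of the uncapped total)")
```

In this toy example a handful of high-frequency series victimisations account for most of the crimes lost to capping, which is exactly the trade-off between volatility and bias that the talk addresses.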


The Impact of Response Styles on Parent-Child Similarity in Anti-Immigrant Attitudes

Dr Cecil Meeusen (Erasmus University Rotterdam) - Presenting Author
Dr Caroline Vandenplas (KU Leuven)

Parents are found to have a considerable impact on their children’s feelings of prejudice toward immigrants through the intergenerational transmission of attitudes. Directly or indirectly, parents can steer the way their children think about ethnic diversity and intergroup relations. Usually, intergenerational similarity is empirically assessed by means of correlational analysis (relative similarity) and percentage agreement on scale scores (absolute similarity). Current research overlooks one potential confounder of parent-child attitudinal correspondence: acquiescence and extreme response styles. These types of response bias might have substantial consequences for research on intergenerational similarity, as this field does not in any way take into account that parents and children might also resemble each other as a consequence of a methodological artefact: parents and children might share the same response-style behaviour, i.e. they may correspond in the way they answer survey questions, irrespective of their content.
The purpose of this paper is threefold. First, we want to assess which type of adolescents show more or less response style behavior. Second, we want to quantify to what extent parents and children correspond in response style behavior. Third, we want to investigate what the effect of (intergenerational similarity) in response bias is for the current state of the art regarding parent-child similarity in social and political attitudes. We use data from the Parent-Child Socialization Study, a two-wave random probability sample of mothers, fathers, and their children in Flanders, the Dutch-speaking part of Belgium (N > 3000).
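To give a concrete sense of the response styles at issue, the sketch below computes simple acquiescence (ARS) and extreme response style (ERS) indices for two respondents from their answers to a set of heterogeneous Likert items. The example answers are hypothetical, and this is not the measurement model used in the study.

```python
# Illustrative sketch only: simple acquiescence (ARS) and extreme response
# style (ERS) indices for one respondent, computed over a set of
# heterogeneous five-point Likert items. The example answers are
# hypothetical and this is not the measurement model used in the study.

def response_style_indices(responses, agree_codes=(4, 5), extreme_codes=(1, 5)):
    """Share of agreement answers (ARS) and of endpoint answers (ERS)
    across the items answered by one respondent."""
    n = len(responses)
    ars = sum(r in agree_codes for r in responses) / n
    ers = sum(r in extreme_codes for r in responses) / n
    return ars, ers

# Hypothetical parent and child answering the same ten heterogeneous items
parent = [5, 4, 5, 5, 4, 5, 2, 5, 4, 5]
child = [5, 5, 4, 5, 5, 4, 5, 3, 5, 5]

for label, answers in (("parent", parent), ("child", child)):
    ars, ers = response_style_indices(answers)
    print(f"{label}: ARS = {ars:.2f}, ERS = {ers:.2f}")
```

Correlating such indices within parent-child dyads is one simple way to gauge whether apparent attitudinal similarity partly reflects shared response-style behaviour rather than shared content.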