ESRA 2019 Draft Programme at a Glance


Assessing the Quality of Survey Data 3

Session Organiser: Professor Jörg Blasius (University of Bonn)
Time: Wednesday 17th July, 09:00 - 10:30
Room: D02

This session will provide a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, and all of them may have a strong influence on the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards between the research institutes involved in the data collection (especially in cross-national research), item and unit non-response, as well as faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Keywords: Quality of data, task simplification, response styles, satisficing

The Demise of Response Rates?: Theoretical Versus Empirical Indicators of Sample Quality

Professor Randall Thomas (Ipsos) - Presenting Author
Dr Frances Barlas (Ipsos)

Early on, survey researchers recognized that randomly selected samples could represent populations with relatively high fidelity. Further, higher response rates were believed to reflect samples more representative of the intended population and to indicate that the resulting data were of higher quality. Therefore, for randomly selected samples, the ‘response rate’ became shorthand for a sample’s quality. However, regardless of survey mode, response rates for probability-based surveys have declined significantly over the last twenty years. As a result, many authors have pointed out that a survey’s response rate may have little bearing on the study’s data validity, especially if non-response is random. With the combination of significantly lower response rates for probability-based samples and the dramatic rise of non-probability opt-in samples for which response rates are not calculable, we face a juncture where new, easily calculable signals of ‘sample quality’ are needed. In this paper we focus on developing an empirically-based quantitative representation of data quality, whereby every sample can be evaluated against high-quality benchmarks and the resulting values combined to provide a sample quality metric. We review a series of four large-scale studies employing both probability and non-probability samples to evaluate the average divergence between sample results and a wide variety of national benchmarks. We discuss benchmarks that could usefully form a standard set of items for a sample quality index independent of sampling or weighting factors. Such an index can be useful in quantifying sample quality from sample selection to survey participation to post-survey adjustment. We further discuss how this empirical indicator can provide information about the relative utility of different samples when they are used in combination and in mixed modes to provide survey estimates, which is occurring with increasing frequency.
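As a rough illustration of the kind of metric described here, the sketch below computes an average absolute divergence between weighted survey estimates and external benchmarks; the benchmark items, their values and the simple averaging formula are illustrative assumptions, not the authors’ actual index.

# Illustrative sketch: average absolute divergence of weighted survey
# estimates from external benchmarks, used as a simple sample quality index.
# Benchmark items and values below are made up for demonstration.

benchmarks = {          # "true" population values, e.g. from census or admin data
    "smokes": 0.14,
    "has_passport": 0.42,
    "voted_last_election": 0.67,
}

sample_estimates = {    # weighted estimates from one survey sample
    "smokes": 0.11,
    "has_passport": 0.49,
    "voted_last_election": 0.74,
}

def quality_index(estimates: dict, benchmarks: dict) -> float:
    """Mean absolute divergence (in percentage points) across benchmark items;
    lower values indicate a sample closer to the benchmarks."""
    divergences = [
        abs(estimates[item] - benchmarks[item]) * 100
        for item in benchmarks
    ]
    return sum(divergences) / len(divergences)

print(f"Average absolute divergence: {quality_index(sample_estimates, benchmarks):.1f} pp")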


Improving the Quality of Survey Data Collection – Measuring Social Desirability on the NatCen Panel

Ms Marta Mezzanzanica (National Centre for Social Research) - Presenting Author
Miss Ruxandra Comanaru (NatCen Social Research)
Mr Curtis Jessop (NatCen Social Research)

Survey findings are based on the self-reported attitudes and behaviours of people taking part. If questions relate to perceived social norms, reported attitudes and behaviours may be biased in the direction of the social norm, though this may vary by context and sub-group (Jann, 2015). A challenge for survey researchers is to minimise, and account for, the bias that socially desirable responding introduces into survey results.
The purpose of the present study is to explore whether, and how, a measure of the extent to which an individual’s answers are likely to be affected by social norms can be used in general population surveys to improve the quality of survey data.
The study tests a twenty-item social desirability scale on the NatCen Panel, a probability-based sequential mixed-mode panel in Great Britain. The scale, based on Paulhus’ (1994) Balanced Inventory of Desirable Responding (BIDR - v6), was refined for use in survey research. This involved the use of a five-point Likert scale with all values labelled (to minimise differences across modes), the use of a likelihood rather than an agreement scale (to determine the probability of adopting a socially desirable behaviour), the re-wording of outdated items (to ensure clarity) and randomisation (to minimise order effects).
The study looks at how the estimates of susceptibility to social desirability bias from our proposed scale compare with those in the literature, and the extent to which it could be shortened to make it more appropriate for use in survey research. It also looks at the extent to which answers to the social desirability scale are associated with answers to a bank of twelve attitudinal questions, to explore whether the scale may be useful for identifying questions at risk of social desirability bias and measuring its effect on answer patterns. Further insight is gained through the analysis of response latency.
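As a rough illustration only, and not the NatCen scoring procedure, the sketch below shows how a balanced twenty-item scale on a five-point response format might be scored, assuming half the items are reverse-keyed as in the BIDR, and how the resulting score could be correlated with an attitudinal item to flag questions potentially at risk of social desirability bias; all data are simulated.

# Illustrative sketch (simulated data, assumed keying): scoring a balanced
# 20-item scale answered on a 1-5 response format, then correlating the total
# with one attitudinal item.
import numpy as np

rng = np.random.default_rng(0)                  # simulated data for demonstration only
n_respondents, n_items = 500, 20
responses = rng.integers(1, 6, size=(n_respondents, n_items))   # values 1-5
reverse_keyed = np.arange(1, n_items, 2)        # assume every second item is reverse-keyed

scored = responses.copy()
scored[:, reverse_keyed] = 6 - scored[:, reverse_keyed]          # reverse-code keyed items
sds_score = scored.sum(axis=1)                  # higher = more socially desirable responding

attitude_item = rng.integers(1, 6, size=n_respondents)           # one simulated attitudinal item
r = np.corrcoef(sds_score, attitude_item)[0, 1]
print(f"Correlation with social desirability score: r = {r:.2f}")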


Surveying Disadvantaged Adolescents: Representation and Measurement Biases

Dr Susanne Vogl (University of Vienna) - Presenting Author

Collecting data in an online panel survey on adolescents’ transitions after secondary school poses many challenges; I focus here on representation and measurement errors. I will present results from a mixed-methods panel study conducted online with 14- to 16-year-olds in Vienna in 2018 and reflect on experiences with the recruitment effort and outcome rates when schools and school authorities are involved and both guardians’ and adolescents’ consent is required. Because of the multiple actors in the sampling process, sample biases are inevitable. I will critically review our experiences and draw conclusions for future research.
Furthermore, with low educational attainment and more than half of the respondents having German as their second language, measurement quality is also at risk. We therefore paid special attention to questionnaire design and pretesting. Additionally, to keep up motivation and attention, I introduced a split-ballot experiment with video clips between thematic blocks, forced choice, and delayed display of the submit button. I examine the effect of these treatment conditions on duration, break-offs, item nonresponse and response patterns.
The aim of the contribution is to discuss the practical requirements and problems of surveying disadvantaged adolescents as one hard-to-reach population, and to showcase the strategies taken in recruitment, questionnaire design and survey techniques, together with their implications and effects. The lessons learnt can promote methodological discussion in general and the researching of disadvantaged adolescents in particular.


The Problem of High Frequency Counts in Official Statistics – Trading Bias Against Volatility in Reporting.

Professor Brian Francis (Lancaster University) - Presenting Author

This talk asserts that bias in the reporting of official statistics is often taken less seriously than precision – that reducing year-to-year volatility is prioritised over any concerns about bias.
We take as an example the Crime Survey for England and Wales, although the issue affects many other government surveys. The survey measures crimes, or victimisations, through a set of victim forms. A victim form can either record a one-off crime or a series victimisation, which is a repeated victimisation of the same type and severity by the same perpetrator. In that case, participants are asked how many times this happened to them. Thus, a response of 26 for a series domestic violence offence would indicate that 26 different crimes took place, once every two weeks, which is not uncommon in domestic violence.
Counting all repeated crimes, however, introduces volatility into the year-to-year series, and so statistical organisations have capped the high-frequency count at some low value. The UK Office for National Statistics currently uses a cap of five victimisations over a year (soon to change to 12 for domestic violence), so a figure of 26 or 52 would be capped at 5. This method, a form of winsorisation, increases precision but introduces considerable undercount bias.
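A minimal sketch of the capping described above: the cap of five and the example counts of 26 and 52 follow the abstract, while the helper function and the other counts are illustrative.

# Illustrative sketch of capping (winsorising) series victimisation counts.
# The cap of 5 reflects current ONS practice as described above; the example
# counts are for demonstration.

def capped_count(reported_times: int, cap: int = 5) -> int:
    """Return the number of victimisations counted after applying the cap."""
    return min(reported_times, cap)

series_counts = [1, 3, 26, 52]                               # reported repetitions per victim form
uncapped_total = sum(series_counts)                          # 82 crimes actually reported
capped_total = sum(capped_count(c) for c in series_counts)   # 1 + 3 + 5 + 5 = 14

undercount = uncapped_total - capped_total
print(f"Capped total: {capped_total}; crimes excluded by the cap: {undercount}")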
Precision is prioritised for political expediency, as it is hard to explain big changes in crime from one year to the next; however, the absolute estimate of the number of crimes is heavily biased, with a large number of crimes ignored. Within the Crime Survey, no estimate of the size of this bias is provided, even though the quality principle of the UK’s statistical code of practice requires it.
This talk estimates the degree of bias such capping introduces and discusses possible solutions which reduce bias while keeping precision at an acceptable level.


The Second FRA Survey on Discrimination and Hate Crime Against Jews

Dr Vida Beresneviciute (EU Agency for Fundamental Rights (FRA))
Dr Rossalina Latcheva (EU Agency for Fundamental Rights (FRA)) - Presenting Author

The FRA’s second Survey on discrimination and hate crime against Jews is an important evidence base, providing a wealth of information on the prevalence of antisemitism across the EU. The survey was conducted in 2018 in 13 EU Member States (Austria, Belgium, Denmark, France, Germany, Hungary, Italy, Latvia, the Netherlands, Poland, Spain, Sweden and the United Kingdom) with a total of 16,660 survey completions. The first survey was conducted in 2012 to address the lack of comparable evidence on the experiences of Jewish people in relation to antisemitism, hate crime and discrimination. The first survey filled an important gap in knowledge about the everyday experiences of Jewish people in nine EU Member States. The aim of the second study is to build on the findings of the 2012 survey and help understand how these issues have changed over time given the changing climate in Europe. The survey was delivered, on behalf of FRA, through a consortium partnership between Ipsos UK and the Institute for Jewish Policy Research (JPR).
The approach involved using an open web survey with agnostic design that was distributed via Jewish communal organisations, groups and media outlets, accompanied by an extensive programme of community engagement to ensure that the survey had as wide a reach as possible. Given the opt-in online approach and the lack of comprehensive Jewish population statistics in some countries, there are limits to the extent to which the quality of the sample can be assessed. Looking at both surveys, the presentation will critically reflect upon methodology, comparability and weighting issues, and place a special focus on the difficulties, and possible solutions, that researchers face when they have to rely on non-probability sampling methods.