Conference Programme 2015



Tuesday 14th July, 16:00 - 17:30 Room: O-201

Assessing the Quality of Survey Data 3

Convenor Professor Joerg Blasius (University of Bonn)

Session Details

This session will present a series of original investigations into data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically induced variation, all of which may strongly influence the “substantive” findings. These sources include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when methodologically induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Paper Details

1. How a Comprehensive Program Interface Reduces the Time Cost of Survey Data Editing
Mr Richard Windle (Board of Governors of the Federal Reserve System)

For complex surveys, one of the most effective tools for reducing survey error is the review of interviewer comments. In the Survey of Consumer Finances (SCF), this process is extraordinarily time-consuming, requiring months of careful analysis by trained editors. To reduce this time cost, a system was designed that incorporates survey data, automatically generated financial sheets, interviewer comments, and a series of data checks into a single, easy-to-use program called the Editor Assistant (EA). The EA was fully deployed for the 2013 SCF and is credited in large part for the six-month reduction in required editing time.



2. Sampling designs of the European Social Survey during the first seven rounds
Professor Seppo Laaksonen (University of Helsinki)
Professor Sabine Häder (GESIS)
Professor Siegfried Gabler (GESIS)

The ESS aims to control the sample designs used by specifying sampling guidelines to be followed in each country. The main requirements are the use of probability sampling and the achievement of a minimum effective sample size. The latter is determined by the gross sample size, ineligibility rate, nonresponse rate, inclusion probabilities and clustering effects.

The sampling requirements have not always been well satisfied. In this presentation we will outline and discuss key problems that have been encountered to date. We will present summary statistics on sample design parameters, highlighting trends that include increasing ineligibility rates and nonresponse rates.
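The relationship between the design parameters listed above and the effective sample size can be sketched with the standard Kish design-effect formula. This is an illustrative reconstruction, not taken from the paper or the ESS guidelines, and all numbers in the example are hypothetical.

```python
# Illustrative sketch of how the abstract's design parameters combine into
# an effective sample size. Assumes the standard Kish design effect for
# clustering; all input values below are hypothetical.

def design_effect(b, rho):
    """Kish design effect for clustering: deff = 1 + (b - 1) * rho,
    where b is the mean cluster size and rho the intraclass correlation."""
    return 1.0 + (b - 1.0) * rho

def effective_sample_size(gross_n, ineligible_rate, nonresponse_rate, b, rho):
    """Expected net interviews divided by the design effect."""
    net_n = gross_n * (1.0 - ineligible_rate) * (1.0 - nonresponse_rate)
    return net_n / design_effect(b, rho)

# Hypothetical country: 3,500 gross sample, 5% ineligible, 40% nonresponse,
# clusters of 10 addresses, intraclass correlation 0.05.
n_eff = effective_sample_size(3500, 0.05, 0.40, 10, 0.05)
print(round(n_eff))  # ~1376, well below the net interview count of 1995
```

The sketch makes the trend noted in the abstract concrete: rising ineligibility and nonresponse rates shrink the net sample, which must then be compensated by a larger gross sample to keep the effective size above the minimum.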



3. The effect of interviewer probing on item nonresponse and measurement error in cross-national surveys: A latent variable analysis
Dr Jouni Kuha (London School of Economics)
Dr Sarah Butt (City University London)
Dr Myrsini Katsikatsou (London School of Economics)

One possible way of reducing item nonresponse in surveys is for interviewers to probe “Don’t Know” responses, encouraging respondents to give a substantive answer if possible. There is a risk, however, that too much probing will lead to measurement error. This paper examines these different possible effects using data from an experiment on the use of probing in the innovation sample of the European Social Survey in three European countries. The data are analysed using latent variable models designed to disentangle the different possible impacts of probing in a multi-item survey setting.


4. Interviewer-related variance in substantive variables in the European Social Survey
Professor Geert Loosveldt (KU Leuven)
Dr Koen Beullens (KU Leuven)

In large cross-national surveys such as the ESS, implementing standardized interviewing techniques and preventing interviewer effects is a major challenge. An evaluation of interviewer-related variance must therefore be considered an essential part of data quality assessment in cross-national research. In the first part of the paper we assess interviewer-related variance for 51 substantive variables from the sixth round of the ESS. The results show differences in ICCs (intraclass correlation coefficients) between countries. In the second part of the paper, interviewer effects on latent variables are evaluated.
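The interviewer-level intraclass correlation referred to in the abstract can be estimated with a one-way random-effects ANOVA decomposition. The following is a minimal sketch of that estimator; the function name and the toy data are illustrative, not taken from the paper.

```python
# Hedged sketch: ANOVA estimator of the interviewer intraclass correlation,
# i.e. between-interviewer variance as a share of total variance.
# Function name and data are illustrative only.
from statistics import mean

def interviewer_icc(groups):
    """Estimate the ICC from `groups`, a dict mapping an interviewer id
    to the list of responses that interviewer collected."""
    k = len(groups)                          # number of interviewers
    sizes = [len(g) for g in groups.values()]
    n = sum(sizes)
    grand = mean(x for g in groups.values() for x in g)
    # between- and within-interviewer sums of squares
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    ssw = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
    msb = ssb / (k - 1)
    msw = ssw / (n - k)
    n0 = (n - sum(m * m for m in sizes) / n) / (k - 1)  # average group size
    var_between = max(0.0, (msb - msw) / n0)
    return var_between / (var_between + msw)

# Toy example: two interviewers whose answers are systematically shifted,
# which should yield a high ICC (most variance lies between interviewers).
icc = interviewer_icc({"int_A": [4, 5, 4, 5], "int_B": [2, 1, 2, 1]})
```

In practice such ICCs are estimated per variable with multilevel models rather than a raw ANOVA, but the decomposition into between- and within-interviewer variance is the same idea.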


5. Interviewer Effects in the European Social Survey 2010
Professor Joerg Blasius (University of Bonn)

In this paper we discuss various possible strategies interviewers might employ to fabricate parts of their interviews, such as asking only one or two questions from a battery of items and then “generalizing” the answers to the entire set. Our guiding hypothesis is that the cross-national prevalence of such data fabrication is a direct function of the pervasiveness of the perceived corruption in each country. Applying anomie theory and rational choice theory, we argue that both expected costs and normative commitment are correlated with the perceived corruption in the country.