Tuesday 14th July, 14:00 - 15:30 Room: O-201


Assessing the Quality of Survey Data 2

Convenor Professor Joerg Blasius (University of Bonn)

Session Details

This session will provide a series of original investigations into data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many different kinds of systematic measurement error or, more precisely, many different sources of methodologically-induced variation, and all of them may have a strong influence on the “substantive” solutions. These sources include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit non-response, and faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Paper Details

1. Design effects in household wealth surveys: results from the Eurosystem’s Household Finance and Consumption Survey
Mr Guillaume Osier (European Central Bank)
Mr Pierre Lamarche (European Central Bank)

The Household Finance and Consumption Survey is an initiative to collect micro-data on household income and wealth across the euro area countries. The sampling designs used may involve components such as stratification, clustering, oversampling, weighting adjustments for unit non-response, and calibration. In this presentation, we intend to measure the effect of these features on accuracy. We rely on the concept of the design effect, which measures the gain or loss in sampling precision caused by using a complex design rather than simple random sampling. The design effect factor will be decomposed to account for the specific effects of unequal weights, imputation and other components.
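As background for readers less familiar with the term (standard definitions following Kish, not results from the paper): the design effect is the ratio of the variance of an estimator under the complex design to its variance under simple random sampling of the same size, and the component due to unequal weighting alone is often approximated as follows:

    \[
    \mathrm{deff}(\hat{\theta}) = \frac{V_{\mathrm{complex}}(\hat{\theta})}{V_{\mathrm{SRS}}(\hat{\theta})},
    \qquad
    \mathrm{deff}_{w} \approx 1 + \mathrm{cv}^{2}(w) = \frac{n \sum_{i=1}^{n} w_i^{2}}{\bigl(\sum_{i=1}^{n} w_i\bigr)^{2}},
    \]

where $\hat{\theta}$ is the estimator, $w_i$ are the design weights and $\mathrm{cv}(w)$ is their coefficient of variation.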


2. Survey Errors in Random Route Samples
Mr Johannes Bauer (Ludwig-Maximilians-Universität München)

In a preceding study, random route instructions were tested for their theoretical property of selecting respondents with equal probability. All routes failed to satisfy this necessary assumption. The present survey reproduces that analysis and extends it to deviations in substantive variables. Registration office data are used to verify the negative impact of biased household selection on survey results. All tested random route instructions lead to biased expected values in multiple variables. The strongest errors were found in variables related to the spatial location of a household. The talk closes with a proposal for how to improve random route samples.
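To see why a route rule can violate equal selection probabilities, consider a deliberately simplified toy example: routes draw a street at random and then select a fixed quota of households on it, so households on short streets are over-sampled. This is a hypothetical rule for illustration only, not one of the instructions tested in the paper:

    import random
    from collections import Counter

    # Toy rule (hypothetical, not from the paper): each route picks one
    # street uniformly at random, then selects a fixed quota of 2
    # households on it. The inclusion probability is 1/3 * 2/size, so it
    # depends on street length -- equal selection probability fails.
    streets = {"A": 4, "B": 8, "C": 16}        # street -> nr. of households
    counts, trials = Counter(), 60_000
    for _ in range(trials):
        street = random.choice(sorted(streets))
        for hh in random.sample(range(streets[street]), 2):
            counts[street, hh] += 1
    for street, size in streets.items():
        share = sum(counts[street, hh] for hh in range(size)) / size / trials
        print(street, round(share, 3))          # approx 0.167, 0.083, 0.042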



3. Interviewer Effects in Real and Falsified Interviews - Results from a Large Scale Experiment
Professor Peter Winker (Justus-Liebig-University Giessen)
Mr Karl-Wilhelm Kruse (Justus-Liebig-University Giessen)
Dr Natalja Menold (GESIS – Leibniz Institute for the Social Sciences, Mannheim)

Interviewers influence data quality in surveys, whether unintentionally or intentionally. We analyse the influence of interviewers’ characteristics and payment schemes on falsified and real data. The analysis is based on data from a large-scale experimental study that includes both real and falsified interviews. For this experimental study, the interviewers’ payment was subject to two different conditions: payment per completed interview and payment per hour. The impact of payment, gender and other interviewer characteristics is analysed. Empirical results are presented, and a conclusion is drawn regarding the impact of the payment scheme on survey data quality.


4. Measurement Error in Discontinuous Online Survey Panels: Panel Conditioning and Data Quality
Professor Lonna Atkeson (University of New Mexico)
Mr Alex Adams (University of New Mexico)
Professor Jeffrey Karp (University of Essex)

We consider two separate but equally problematic measurement concerns that may arise with the use of discontinuous online panel respondents: 1) whether repeatedly asking respondents their opinions changes those opinions; and 2) whether the high frequency of surveying the same individuals for extrinsic rewards reduces the overall quality of respondent data. We find evidence of panel conditioning in the form of decreased survey duration times, increased political sophistication and non-differentiation of responses. Furthermore, we provide evidence that online survey panel respondents are less likely to optimize their responses than respondents in traditional probability modes.
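The abstract does not specify how the authors operationalise non-differentiation; one common simple indicator is “straightlining”, the share of respondents who give the identical answer to every item in a grid battery. A minimal sketch, purely illustrative (the function name is hypothetical):

    import numpy as np

    def straightlining_rate(grid):
        """Share of respondents answering every item in a battery identically.

        grid: respondents x items array of answers to one rating battery.
        """
        grid = np.asarray(grid)
        # A row straightlines when every entry equals its first entry.
        return float((grid == grid[:, [0]]).all(axis=1).mean())

    # Respondent 0 differentiates, respondent 1 straightlines -> rate 0.5
    print(straightlining_rate([[1, 3, 2, 4], [5, 5, 5, 5]]))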


5. Cohen’s kappa and its generalised nieces and nephews: time to say goodbye?
Dr Jarl Kampen (WUR)
Dr Hilde Tobi (WUR)
Mr Jurian Meijering (WUR)

Surveys amongst experts (e.g. Delphi studies) often report the level of agreement amongst respondents by means of Cohen’s kappa as an indicator of data quality. The three reasons for using kappa or its generalisations are that 1) the approach respects the qualitative measurement scale of the variables involved, 2) it quantifies agreement rather than association, and 3) it adjusts for agreement that occurs solely by chance. In practice, this third reason for using kappa yields counter-intuitive results. This paper investigates the issue of kappa being unable to distinguish object-specific characteristics from rater-specific characteristics.
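For reference, and as standard background rather than material from the paper: for two raters, Cohen’s kappa is

    \[
    \kappa = \frac{p_o - p_e}{1 - p_e},
    \]

where $p_o$ is the observed proportion of agreement and $p_e$ the agreement expected by chance from the raters’ marginal distributions. The counter-intuitive behaviour is easy to reproduce with skewed marginals: if both raters answer “yes” for 95% of the objects and agree on 90% of all objects, then $p_e = 0.95^2 + 0.05^2 = 0.905$, so $\kappa = (0.90 - 0.905)/(1 - 0.905) \approx -0.05$: near-perfect raw agreement nevertheless yields a negative kappa.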