Conference Programme 2015



Tuesday 14th July, 11:00 - 12:30 Room: O-201

Assessing the Quality of Survey Data 1

Convenor Professor Joerg Blasius (University of Bonn)

Session Details

This session will present a series of original investigations of data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit non-response, and faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Paper Details

1. Data quality in repeated surveys: evidence from a quasi-experimental design
Professor Alessandra Decataldo (Università di Milano Bicocca)
Professor Antonio Fasanella (Sapienza Università di Roma)
Dr Andrea Amico (Sapienza Università di Roma)
Mr Giampiero D'alessandro (Sapienza Università di Roma)
Mrs Annalisa Di Benedetto (Sapienza Università di Roma)

The aim of this work is to understand whether (and how) repeated data gathering may affect data quality in survey research. Quality is understood as the data's compliance with the logical and methodological conditions required by the research objectives. Validity and reliability need no further reflection with reference to repeated surveys, so this work focuses on completeness, consistency and relevance, providing conceptual and operational definitions. The possible influence of certain respondent characteristics on these issues is also explored. An empirical case study is analyzed: a quasi-experimental study conducted to evaluate an information campaign about chemical risks.


2. Call me maybe? Using phone numbers as indicators of survey data quality
Dr Annie Pettit (Peanut Labs)

This study evaluated whether data validity and data quality can be improved by asking respondents for their phone number within a survey. As a test/control feature, we compared data quality when the question was asked (1) at the beginning or (2) at the end of the survey. Survey questions referenced a number of innocuous, embarrassing, unethical, or illegal activities, some of which were validated against third-party data.


3. Processing Errors in Cross-national Surveys
Ms Ilona Wysmulek (Institute of Philosophy and Sociology, Polish Academy of Sciences)
Mrs Olena Oleksiyenko (Institute of Philosophy and Sociology, Polish Academy of Sciences)

In this presentation we will highlight the issue of processing errors in cross-national survey research. Of all the elements of the preparation and administration of survey fieldwork, relatively little attention has been paid to processing errors, although they can cause both systematic and random errors. The Harmonization Project Democratic Values and Protest Behavior (dataharmonization.org) deals with processing errors explicitly in one of its quality assessments, by focusing on the quality of the correspondence between the documentation and the data. We will present the results of preliminary analyses of processing errors and their impact on data quality.


4. How does household composition derived from census data describe or misrepresent different family types?
Dr Loïc Trabut (Institut National d'Etudes Démographiques (INED))
Professor Eva Lelièvre (Institut National d'Etudes Démographiques (INED))

The census is often the main source of information on family structures, as its data describe household composition for the entire population of a national territory. Nevertheless, census forms are self-administered, and family relationships are captured through a limited set of options. The objective of this presentation is to evaluate how well household composition variables generated from census data describe, or misrepresent, different family types. Taking advantage of the fact that the Family survey is collected simultaneously with the French census, we conduct a systematic comparison: the census household type versus the finer description given by the Family survey.


5. Unexpectedly High Number of Duplicates in Survey Data
Mr Przemek Powałko (Polish Academy of Sciences)

In this work we report an unexpectedly high number of duplicated responses discovered in the data files of well-known international survey projects. In our analysis we distinguished subsets of the following variables: respondent ids, technical (administrative) variables, observations and notes made by interviewers, respondents' age and gender, urban/rural division, household member characteristics, variables derived and constructed from source variables, and source variables coming from respondents' answers. We successively excluded these subsets of variables and searched for duplicates among the remaining variables. With each step we observed a growing number of duplicates.
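The stepwise search described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the variable groups, column names, and toy data are assumptions made for the example.

```python
# Hypothetical sketch of a stepwise duplicate search: variable groups are
# excluded one batch at a time, and exact-duplicate rows are counted among
# the columns that remain. Data and column names are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "resp_id":         [1, 2, 3, 4],       # respondent ids
    "interview_notes": ["a", "b", "c", "c"],  # interviewer notes
    "age":             [30, 29, 41, 41],   # demographics
    "gender":          ["F", "F", "M", "M"],
    "q1":              [3, 3, 5, 5],       # substantive answers
    "q2":              [1, 1, 2, 2],
})

# Groups excluded step by step: ids first, then notes, then demographics.
exclusion_steps = [
    ["resp_id"],
    ["interview_notes"],
    ["age", "gender"],
]

excluded = []
for step in exclusion_steps:
    excluded += step
    remaining = df.drop(columns=excluded)
    n_dup = int(remaining.duplicated().sum())
    print(f"after excluding {excluded}: {n_dup} duplicate rows")
```

As more variables are set aside, rows that differed only on ids, notes, or demographics collapse into exact duplicates, so the count can only grow or stay the same at each step.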