Tuesday 16th July 2013, 14:00 - 15:30, Room: Big hall

Assessing the Quality of Survey Data 1

Convenor: Professor Jörg Blasius (University of Bonn)

Session Details

This session will provide a series of original investigations into data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, and all of them may have a strong influence on the "substantive" solutions. These sources include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit non-response, and faked interviews. We consider data to be of high quality when methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.


Paper Details

1. Quality Standards for Survey Data Collection in the European Social Survey (ESS).

Mr Joost Kappelhof (The Netherlands Institute for Social Research)
Dr Ineke Stoop (The Netherlands Institute for Social Research)
Mrs Verena Halbherr (GESIS)

An important step in assessing data quality is to evaluate the quality standards implemented in data collection. Ideally, methodologically-induced variation is minimized by adhering to data collection specifications that pursue high quality and optimal comparability. Data collection standards in the ESS aim at maximizing the representativity of the final sample, minimizing measurement error, and maximizing comparability across countries. Standards related to representativity include, among others, a standard definition of the target population, a target response rate, and a minimum effective sample size. Standards related to measurement prescribe a maximum assignment size per interviewer and strict rules on translation and testing. Standards aimed at optimal comparability include, for example, the fieldwork period, the survey mode, and the identical structure of the questionnaire.
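One of these standards lends itself to a worked example: a minimum effective sample size corrects the nominal sample size for the loss of precision caused by clustering. A minimal sketch of the standard Kish approximation, with purely illustrative numbers (these are not ESS figures):

```python
# Minimal sketch of the Kish approximation behind "minimum effective
# sample size" requirements. All numbers are illustrative only.

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Kish design effect for clustering: deff = 1 + (b - 1) * rho,
    where b is the average cluster size and rho the intraclass
    correlation of the target variable."""
    return 1.0 + (avg_cluster_size - 1.0) * icc

def effective_sample_size(n: int, deff: float) -> float:
    """The simple-random-sample size that gives the same precision
    as the actual clustered sample of size n."""
    return n / deff

deff = design_effect(avg_cluster_size=10, icc=0.05)  # -> 1.45
print(effective_sample_size(n=2175, deff=deff))      # -> 1500.0
```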

In cross-national surveys, identical standards can have a different meaning and impact. A response rate of 70% may be high in some countries and low in others; interviewers in some countries will be better trained and more experienced than in others.

Finally, standards cannot be seen in isolation. A very long fieldwork period may result in a higher response rate but will reduce cross-national comparability. Deploying a small number of very experienced, high-quality interviewers may also increase response rates, but could result in more interviewer effects and higher costs.

The presentation focuses on data collection standards in the ESS, gives examples of the impact of these standards on data quality, and discusses cross-national comparability and the trade-offs between different quality aspects.


2. The consistency of straightlining and speeding over time and personality correlates

Dr Natalia Kieruj (CentERdata)
Ms Corrie Vis (CentERdata)

Response style behavior is considered a major threat in survey research, since it can seriously distort answering patterns. Greater insight into response bias makes it easier to limit the occurrence of this type of problem or, if it cannot be prevented, to clean the data after collection.

In this research we focus on two types of response style, namely straightlining and speeding. Straightlining is defined as selecting the exact same response option for all items when items with rating scales are presented in grids (Rossmann, 2010). Speeding is defined as completing the survey in an extremely short time (Rossmann, 2010). Both speeding and straightlining can be considered forms of satisficing, where respondents do just enough to satisfy the survey request, but no more (Krosnick, 2000).
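A minimal sketch of how these two definitions can be operationalized on a respondents-by-items table; the column names and the speeding cutoff are illustrative assumptions, not values taken from the paper:

```python
import pandas as pd

def flag_response_styles(df: pd.DataFrame, grid_cols: list[str],
                         duration_col: str,
                         speed_cutoff_sec: float) -> pd.DataFrame:
    """Flag straightliners and speeders per the definitions above."""
    flags = pd.DataFrame(index=df.index)
    # Straightlining: the exact same response option on every grid item.
    flags["straightliner"] = df[grid_cols].nunique(axis=1).eq(1)
    # Speeding: total completion time below a study-specific cutoff.
    flags["speeder"] = df[duration_col] < speed_cutoff_sec
    return flags

# Hypothetical usage: five rating items presented in one grid.
# flags = flag_response_styles(df, [f"q{i}" for i in range(1, 6)],
#                              duration_col="duration_sec",
#                              speed_cutoff_sec=120)
```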

The central research question is whether these types of response style are the result of external factors (e.g. method and test conditions) or whether they are more likely the result of internal factors (e.g. personality or socio-demographic characteristics).

Making use of the LISS household panel of CentERdata (whose members receive surveys on a monthly basis), we checked whether straightlining and speeding patterns were present over the course of six months. Subsequently, we examined whether speeders and straightliners share certain personality traits that can be related to these response styles. We also investigated whether straightlining and speeding were content-related and whether the use of these response styles was consistent over time.


3. Identifying and Mitigating Satisficing in Web Surveys: Some Experimental Evidence

Mr Joss Rossmann (GESIS - Leibniz Institute for the Social Sciences)

Satisficing behavior is a widespread hazard in Web surveys because interview supervision is limited in the absence of a human interviewer. It is therefore important to devise methods that help to identify and mitigate satisficing. The paper examines whether innovative questionnaire design can be an efficient means of detecting satisficing and reducing measurement error resulting from non-substantive answers, non-differentiation in matrix questions, and speeding. It analyzes to what extent these types of satisficing can be minimized by using three tools suggested in recent research. First, several studies use prompts to reduce the incidence of non-substantive answers. Second, some authors propose alternative designs for matrix questions (so-called scrolling matrix questions) to mitigate response non-differentiation. Third, control questions (or instructional manipulation checks) are intended to identify inattentive respondents.

The statistical analyses rely on data from two Web surveys of respondents from a probability-based and a non-probability online panel, respectively. Each of the design innovations is randomly assigned to half of the sample, while the other half acts as the control group. The results of the experimental manipulations are presented, and a multivariate regression model of satisficing behavior is estimated to test whether the design innovations contribute to the explanation of satisficing once respondent characteristics predictive of satisficing are controlled for. The paper concludes with an assessment of the potential of the three tools to increase data quality and a discussion of their advantages and pitfalls.
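To illustrate the analysis strategy, a minimal sketch of such a model: a logistic regression of a binary satisficing flag on the randomly assigned condition plus respondent controls. The simulated data and every variable name below are hypothetical placeholders, not the paper's actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: a binary satisficing flag,
# a 0/1 indicator for the randomly assigned design innovation,
# and respondent characteristics used as controls.
rng = np.random.default_rng(0)
n = 1000
survey_df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
    "education": rng.integers(1, 6, n),
})
# Simulate less satisficing under treatment (for illustration only).
p = 1 / (1 + np.exp(-(-1.0 - 0.5 * survey_df["treatment"])))
survey_df["satisficed"] = rng.binomial(1, p)

# Does the design innovation reduce satisficing, net of controls?
model = smf.logit("satisficed ~ treatment + age + education",
                  data=survey_df).fit()
print(model.summary())
```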


4. Can we Trust Survey Data? The Case of PISA

Professor Jörg Blasius (University of Bonn, Department of Political Science and Sociology)
Professor Victor Thiessen (Dalhousie University, Halifax)

Using the 2009 PISA data, the quality of the reports from principals of the participating schools was examined cross-nationally. Two measures of data quality were employed in addition to unit response rate and item non-response: 1) the frequency of providing the same (undifferentiated) response to all items in several domains, and 2) the number of times an identical response pattern occurred across 184 variables of the questionnaires within each country. A total lack of variation in a principal's responses constitutes an extreme form of response simplification, while multiple instances of identical response patterns jeopardize the veracity of the data. Our analyses document several patterns. First, the response rate for principals is admirably high and item non-response is generally not problematic, although both indicators vary substantially by country. Second, both response rate and item non-response are relatively independent of our two main indicators of data quality. Third, extreme response simplification characterizes the behavior of substantial numbers of respondents in numerous countries. Finally, strong evidence of data fabrication (through "copy and paste" procedures) was discovered in several countries, notably Italy, Slovenia, and the United Arab Emirates. We conclude that screening data for quality is an essential prerequisite, even in projects known for producing superior data, if cross-national comparisons are to be made.
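Both screening measures are straightforward to compute from a principals-by-items response matrix. A minimal sketch, assuming the responses sit in a pandas DataFrame with one row per principal; all column and grouping names are illustrative assumptions:

```python
import pandas as pd

def undifferentiated(df: pd.DataFrame, domain_cols: list[str]) -> pd.Series:
    """Measure 1: flag principals who gave the exact same response
    to every item in a questionnaire domain."""
    return df[domain_cols].nunique(axis=1).eq(1)

def identical_pattern_counts(df: pd.DataFrame) -> pd.Series:
    """Measure 2: count how often each complete response pattern
    occurs; counts > 1 mark patterns shared verbatim by several
    principals, a possible sign of copy-and-paste fabrication."""
    patterns = df.astype(str).agg("|".join, axis=1)
    return patterns.value_counts()

# Hypothetical usage, screening country by country:
# for country, sub in responses.groupby("country"):
#     dupes = identical_pattern_counts(sub.drop(columns="country"))
#     print(country, (dupes > 1).sum(), "duplicated response patterns")
```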