



Wednesday 19th July, 09:00 - 10:30 Room: Q2 AUD1 CGD


Assessing the Quality of Survey Data 1

Chair: Professor Jörg Blasius (University of Bonn)

Session Details

This session will provide a series of original investigations into data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many different kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, and all of them may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstood questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Paper Details

1. Comparing Survey Data and Registry Data: How Reliable are Measures of Wages and Educational Degrees?
Mr Peter Valet (Bielefeld University)
Ms Jule Adriaans (Bielefeld University)

Respondents’ wages and educational degrees are key indicators in social stratification research. However, measuring wages involves several methodological problems. For one thing, wages are a sensitive topic, so respondents often refuse to provide information or may even drop out if asked about their wages. But even when respondents do answer questions on their wages, the information they provide is often biased. One reason may be that wages are measured in various ways (e.g., before or after taxes; on a yearly, monthly, or hourly basis) and that some respondents are more familiar with one way than with another. Social stratification researchers therefore increasingly rely on registry data, such as tax data or data on social security contributions, which provide official information on people’s wages. The downside of using registry data is that information on other key indicators of social stratification is sparse and often questionable. One reason is that such information on the employee is usually provided by the employer. The reliability of registry data on employees therefore crucially depends on whether the information is available to the employer, e.g., whether a certain educational degree is required to fill a vacant position.

To date, evidence is scarce on how to assess the quality of survey data on wages and on whether some respondents provide more reliable wage information than others. Beyond that, there is also little evidence on the quality of registry data on other core indicators of social stratification, such as educational degrees.

In this study, we investigate systematic differences between survey data and registry data by comparing measures of wages and educational degrees in a linked data set. We use data from the German employee survey “Legitimation of Inequality Over the Life-Span” (LINOS), conducted in winter 2012/13, and registry data from the Federal Employment Agency. Because the sample was drawn from German social security records, we were able to link the LINOS survey data to registry data on employees’ individual employment histories (IEB). The IEB data provide reliable registry information on employees’ wages and also include additional information on their educational degrees. By comparing the wage information provided in the survey to the more reliable registry data, we are able to assess the quality of survey data on wages. Furthermore, we analyze the reliability of registry data on educational degrees by comparing, in turn, the registry information on educational degrees with the detailed information survey respondents provided on their educational backgrounds.
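Such a comparison could be operationalized as in the following minimal sketch, which computes each respondent’s relative deviation of self-reported from registry earnings and flags large discrepancies. The file name, column names, and the 10% tolerance are illustrative assumptions, not details taken from the LINOS/IEB linkage itself.

```python
import pandas as pd

# Hypothetical linked survey-registry file; column names are invented.
linked = pd.read_csv("linked_linos_ieb.csv")

# Relative deviation of self-reported from registry gross monthly earnings.
linked["rel_dev"] = (
    linked["survey_wage"] - linked["registry_wage"]
) / linked["registry_wage"]

# Flag respondents who considerably under- or overestimate (here: by >10%).
linked["misreport"] = linked["rel_dev"].abs() > 0.10
print(f"Share misreporting by more than 10%: {linked['misreport'].mean():.1%}")
```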

Our first results reveal that more than one third of the survey respondents considerably under- or overestimated their actual gross monthly earnings. The reliability of wage information is related to employees’ socio-economic backgrounds, their monthly earnings, whether they negotiated their earnings, and their gender. The reliability of registry data on educational degrees is related primarily to company size and industry.


2. Quality metric for ongoing health surveys
Dr Margo Barr (University of Wollongong)
Professor David Steel (University of Wollongong)

Total survey error is the conceptual framework describing the statistical error properties of sample survey statistics; it covers both sampling error and non-sampling error. The framework’s two dimensions, representation and measurement, concentrate on the accuracy of the information collected, i.e. the closeness between the estimated and the true (unknown) values. The framework does not include quality issues around the use of the statistics themselves.

This paper examines how total survey error, with the addition of a third dimension that accounts for the use of the statistics, the impact dimension, assists in interpreting the quality of ongoing health surveys. It includes the development of a quality metric that incorporates the three quality dimensions, and its application to an ongoing population health survey in Australia. The survey scored 5/6 points on the representation dimension, 5/8 on the measurement dimension, 3/4 on the impact dimension, and 13/18 (72%) overall. The paper then explores whether this approach assists in determining the quality of ongoing health surveys and considers its possible use in the future.
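The arithmetic behind the metric is straightforward to reproduce: each dimension contributes its awarded points and its maximum, and the overall score is the ratio of the sums. The sketch below uses the scores reported in the abstract; the underlying scoring criteria are not given here and would be assumptions.

```python
# Dimension scores as (points awarded, points available), from the abstract.
scores = {
    "representation": (5, 6),
    "measurement": (5, 8),
    "impact": (3, 4),
}

awarded = sum(points for points, _ in scores.values())
available = sum(total for _, total in scores.values())

for dimension, (points, total) in scores.items():
    print(f"{dimension}: {points}/{total}")
print(f"overall: {awarded}/{available} ({awarded / available:.0%})")  # 13/18 (72%)
```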


3. Quality Controls and Their Application to Substantive Analyses of Data from International Survey Projects
Dr Irina Tomescu-Dubrow (Polish Academy of Sciences and CONSIRT)
Professor Kazimierz Slomczynski (The Ohio State University, the Polish Academy of Sciences and CONSIRT)

On the basis of the Total Survey Error (TSE), Total Survey Quality (TSQ) and Total Quality Management (TQM) frameworks, we propose to take into account “methodological variability” in the source surveys via metadata composed of three sets of control variables: (1) variables that describe the quality of data documentation dealing with sampling design, questionnaire preparation, pre-testing and fieldwork control; (2) variables that capture inconsistencies between survey documentation and records in the computer data files; and (3) variables describing errors or biases in the data records themselves, such as the frequency of erroneous respondent IDs, non-unique records (duplicates), missing data across socio-demographics, and erroneous weights. We applied this schema of control variables to 1,721 national surveys in 22 international projects, including the World Values Survey, the International Social Survey Programme, the European Social Survey, and the Eurobarometer and its regional renditions around the world, as well as some more specialized studies on political attitudes and behavior. We show that the distribution of the control variables is not random; they correlate, to a varying but significant degree, with important substantive variables such as attending demonstrations or trust in public institutions. We propose three ways in which scholars can employ these metadata in substantive analyses: (1) as “filters” for selecting those datasets that best fit their data quality requirements; (2) as a weighting index of source data quality, to be applied when analyzing data harmonized ex-post; and (3) as control variables, to partial out the effect of variation in source data quality on variables of substantive interest. Illustrative empirical analyses reveal the strengths and weaknesses of these approaches.
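A minimal sketch of what the three uses might look like in practice follows, assuming an ex-post harmonized table with per-survey quality metadata attached to each record. All column names, the input file, and the model specification are invented for illustration; they are not the authors’ variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical harmonized data with quality metadata per source survey.
df = pd.read_csv("harmonized_surveys.csv")

# (1) Filter: keep only source surveys without duplicates or broken IDs.
filtered = df[(df["n_duplicate_records"] == 0) & (df["n_erroneous_ids"] == 0)]

# (2) Weight: combine the design weight with a source-quality index in [0, 1].
df["combined_weight"] = df["design_weight"] * df["quality_index"]

# (3) Control: partial out source data quality in a substantive model.
model = smf.ols("trust_in_institutions ~ education + quality_index", data=df).fit()
print(model.summary())
```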


4. Assessing and interpreting discontinuities in the transition to an integrated survey
Mr Paul Smith (S3RI, University of Southampton)
Professor Nikos Tzavidis (University of Southampton)
Dr Timo Schmid (Freie Universität Berlin)
Professor Jan van den Brakel (University of Maastricht)
Mr Steven Marshall (Welsh Government)

The Welsh Government has replaced five existing surveys with a new National Survey, which includes a longer questionnaire covering many of the topics originally in the component surveys. The new survey began in 2016 and will run until at least 2021.

Redesigning a survey generally affects the non-sampling errors and therefore has a systematic effect on the survey estimates. These systematic differences are called discontinuities. Separating real changes from discontinuities due to the redesign is important for maintaining uninterrupted time series of estimates. Part of the transition process is to provide users with information on the likely effect of the change to the new survey on the existing estimates, and to help them use this information to interpret the changes where they are of substantive importance for policy purposes. To provide early information on these effects, a large-scale pilot survey was implemented in 2015 alongside (for some surveys) or shortly after (for others) the last instances of the original surveys. This paper discusses the work carried out to assess the discontinuities, and how users should interpret them.

The discontinuities may be assessed in three main ways: at the national level using a direct estimator; at the domain level using a direct estimator; and at the domain level using indirect estimation to deal with problems due to small sample sizes. All three strategies were used in this instance. The strategies will be described, and examples of each will be shown to demonstrate the differences between them in particular cases. We will formulate an outline of best practice for general use, although specific analysis may be needed in any particular case to decide on the best approach.
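For the direct strategies, a discontinuity can be estimated from a parallel run as the difference between the new-design and old-design estimates, with variance equal to the sum of the two variances when the samples are independent. The sketch below illustrates this with invented numbers; it is not output from the Welsh surveys.

```python
import math

# Parallel-run estimates of the same proportion under the two designs
# (illustrative values, not actual survey results).
est_old, se_old = 0.62, 0.012
est_new, se_new = 0.58, 0.015

discontinuity = est_new - est_old
se_disc = math.sqrt(se_old**2 + se_new**2)  # independent samples assumed
z = discontinuity / se_disc  # test whether the discontinuity is significant

print(f"discontinuity: {discontinuity:+.3f} (SE {se_disc:.3f}, z = {z:.2f})")
```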

We then use the estimated discontinuities and their standard errors to set out the possibilities for adjusting the series, and to show how users should account for this information when using the results of the new survey. We also consider alternative approaches for estimating and adjusting for discontinuities as further information from the new survey becomes available, and make an initial assessment of how much information will be needed to measure the evolution of the series most accurately at the time of the transition to the new design. Finally, we consider how the different elements of quality, particularly timeliness and the accuracy of the estimated levels and changes, are traded off in making an appropriate assessment of overall survey quality.
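One simple adjustment option, sketched below under the assumption of an additive discontinuity, is to backcast the pre-transition series onto the level of the new design by adding the estimated discontinuity to each historical point. The values are invented for illustration.

```python
# Pre-transition estimates under the old design (invented values).
old_series = [0.60, 0.61, 0.62]
discontinuity = -0.04  # estimated from the parallel run, as above

# Express the historical series on the level of the new design.
adjusted_series = [round(estimate + discontinuity, 3) for estimate in old_series]
print(adjusted_series)  # [0.56, 0.57, 0.58]
```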