ESRA 2021 full program

Assessing and improving measurement instrument comparability when combining data from different surveys

Session Organiser: Dr Ranjit K. Singh (GESIS - Leibniz Institute for the Social Sciences)
Time: Friday 16 July, 13:15 - 14:45

Survey programs in the social sciences tend to use different measurement instruments for the same concepts. Even commonly surveyed concepts are usually measured with different wording, different response options, or even a different number of items in almost every larger survey in a country (and across countries). This often limits comparability between surveys and across countries and cultures. It also poses a challenge to researchers who want to combine data from different surveys. Consequently, the focus of this session is how to assess and improve measurement instrument comparability across surveys. The session should be of interest to researchers who want to combine existing data from different surveys, as well as to data producers who want to maintain their time series or improve synergies with other survey programs. The session will offer a balanced mix of methodological research and practical insights and inspiration from harmonization practitioners.

Keywords: Harmonization, survey data, comparability, measurement quality

Measuring divorce risk with pooled survey data – A comparison between prospectively and retrospectively collected marriage biographies

Dr Sonja Schulz (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Ms Lisa Schmid (GESIS - Leibniz Institute for the Social Sciences)
Ms Anna-Carolina Haensch (GESIS - Leibniz Institute for the Social Sciences)
Ms Antonia May (GESIS - Leibniz Institute for the Social Sciences)

The analysis of social change in divorce behavior requires data that cover a long historical period and a very high number of marriages in order to differentiate between marriage cohorts. For Germany, this is only possible with harmonized and pooled survey data containing date information on respondents' past and current marriages (marriage biographies). In general, data on union formation and separation can be collected prospectively, that is, by repeatedly asking individuals about their partnerships (in a panel study), or retrospectively, that is, by asking individuals about their past partnerships (e.g., in a cross-sectional study or in the first wave of a panel study). Both modes of collecting marriage biographies can produce biased estimates of divorce risk. Prospectively collected data are likely to overestimate union stability, as individuals in unstable partnerships are more likely to leave ongoing survey programs than individuals in stable unions, leading to disproportionate panel attrition among respondents in unstable partnerships. Retrospective collection, in turn, sometimes restricts the number of partnerships a respondent can report within the questionnaire. If individuals have limited opportunities to report past relationships, they cannot report all the break-ups they actually experienced, which also results in an overestimation of partnership stability. However, accurate estimates of social change in divorce behavior across different data sets require unbiased data on the outcome variable.

Therefore, we assess which survey characteristics are associated with respondents' risk of separation. This study compares the risk of union dissolution between different German surveys that were harmonized and pooled in the DFG project "Harmonizing and Synthesizing Partnership Histories from Different Research Data Infrastructures" (HaSpaD). To explain differences in separation risks between surveys, various survey characteristics are considered, such as the retrospective or prospective collection of marital biographies, the number of retrospectively collected partnerships, and the main survey topic. The effects of these survey characteristics are discussed.


Measuring and harmonizing socio-demographic variables across large-scale studies in Germany – an overview

Dr Verena Ortmanns (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Silke Schneider (GESIS - Leibniz Institute for the Social Sciences)

Researchers increasingly rely on data from different surveys. Combining surveys allows us to draw more robust conclusions, extend time series, increase the resolution for georeferencing, examine smaller subpopulations, or link surveys covering different populations for international comparisons. However, combining different surveys is challenging because they use various measurement instruments for identical concepts. This is especially true for socio-demographic variables (such as age, gender, education, and occupation), which are collected in (almost) all social science survey programs and widely used as control (or 'background') variables in multivariate models. Measurement instruments for socio-demographic variables are often similar across surveys but vary in important details. Such mismatched measurement instruments lead to extra work, information loss, biases, or even spurious findings.
This issue is tackled within the context of the "Consortium for the Social, Behavioural, Educational, and Economic Sciences (KonsortSWD)" of the National Research Data Infrastructure (NFDI). One aim of this initiative is to develop a systematic overview of measurement instruments for socio-demographic variables used in large-scale studies in Germany. The presentation gives an overview of the measurement instruments implemented in the German General Social Survey (ALLBUS), the German Socio-Economic Panel (SOEP), and the German National Educational Panel Study (NEPS). We will look at several socio-demographic variables and discuss how similar or different the measurement instruments are. We expect to identify rather similar measurement instruments for age, gender, and citizenship, whereas the instruments for measuring respondents' education, occupation, and especially income will probably differ to a larger extent between the surveys. In a second step, the presentation takes a closer look at individual examples and discusses the potential for harmonizing some of these variables to increase comparability across the surveys. In some cases, the variables can be ex-post harmonized based on the existing instruments and data without losing information. Other variables, however, can only be ex-post harmonized by discarding information and thus reducing the level of detail. In such cases, ex-ante harmonization, that is, a change in the way the variables are measured, may be preferable.
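To make the lossless-versus-lossy distinction concrete, here is a minimal Python sketch of ex-post recoding onto a common scheme. It is purely illustrative: the category codes, labels, and mappings are invented and do not reproduce the actual ALLBUS, SOEP, or NEPS codings.

```python
# Invented source codings mapped onto a common, coarser scheme.
survey_a_to_common = {
    1: "low", 2: "low",        # detailed source categories nest within
    3: "medium",               # the common levels, so recoding is easy,
    4: "high", 5: "high",      # but ...
}
survey_b_to_common = {
    1: "low",
    2: "medium", 3: "medium",  # ... source categories that share a target
    4: "high",                 # level can no longer be distinguished:
}                              # the recoding discards detail

def harmonize(codes, mapping):
    """Recode survey-specific values into the common target scheme."""
    return [mapping[c] for c in codes]

print(harmonize([1, 3, 5], survey_a_to_common))  # ['low', 'medium', 'high']
print(harmonize([1, 2, 4], survey_b_to_common))  # ['low', 'medium', 'high']
```

Ex-ante harmonization would instead change the source instruments so that such lossy collapsing is unnecessary in the first place.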
In sum, the presentation aims to explore the trade-offs involved in harmonizing socio-demographic variables. Some harmonization challenges can be solved by data users themselves, some by harmonization experts or by data producers, and some only by the coordinated efforts of all three groups.


Harmonizing political interest data with equating

Dr Ranjit K. Singh (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author

If we want to compare or combine survey data from different sources, we often face the challenge that the same concepts were measured with different instruments (e.g., with differently worded questions or different response options). In such cases, comparability of the existing data must be improved via a process of ex-post harmonization. In my talk, I will explore this issue with two measures of political interest, which mainly differ in the number of response options (four vs. five) and the response label wording.
With data from an international survey program (the ISSP), three German survey programs (ALLBUS, GLES, and the GESIS Panel), as well as two methodological online experiments, I will demonstrate that: (1) Even minor instrument differences can result in substantial comparability issues, such as spurious or biased mean differences and correlations. (2) These comparability problems can stem from different sources, such as different scale points or different scale labels. (3) Aligning scales with linear stretching is insufficient. Linear stretching maps the minimum and maximum scale points onto one another and spaces all scores in between at equal distances. This is problematic because it only takes the number of scale points into account and ignores response distributions and scale labels. (4) Observed score equating, by contrast, offers considerably better results. Equating is a family of methods used in psychometric diagnostics to make the results of different tests comparable; observed score equating is a subset of those methods that can be applied to survey instruments with only one question (as opposed to multi-item instruments).
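As a rough illustration of the difference between the two alignment methods (and not the implementation used in the talk), the following Python sketch contrasts linear stretching with a simplified form of observed score equating for hypothetical 4-point and 5-point items; all distributions are invented toy data.

```python
import numpy as np

def linear_stretch(x, src=(1, 4), tgt=(1, 5)):
    """Map scores linearly from the source range onto the target range.
    Uses only the scale endpoints; ignores labels and response distributions."""
    return (x - src[0]) / (src[1] - src[0]) * (tgt[1] - tgt[0]) + tgt[0]

def equate_observed(x, src_sample, tgt_sample):
    """Simplified observed-score (equipercentile) equating: map each source
    score to the target score at the same percentile rank, so the response
    distributions of both samples drive the transformation."""
    ranks = np.searchsorted(np.sort(src_sample), x, side="right") / src_sample.size
    return np.quantile(tgt_sample, ranks)

# Invented toy distributions for a 4-point and a 5-point item.
rng = np.random.default_rng(42)
four_pt = rng.choice([1, 2, 3, 4], size=2000, p=[0.10, 0.30, 0.40, 0.20])
five_pt = rng.choice([1, 2, 3, 4, 5], size=2000, p=[0.05, 0.15, 0.35, 0.30, 0.15])

scores = np.array([1, 2, 3, 4])
print(linear_stretch(scores))                    # [1.  2.33  3.67  5. ]
print(equate_observed(scores, four_pt, five_pt)) # distribution-aware mapping
```

Production-grade equating (e.g., with continuized percentile ranks and smoothing) is more involved; the sketch only shows why the two methods can disagree.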
The talk also explores a crucial practical issue in using equating to harmonize survey instruments. Equating single-question instruments requires either data in which respondents answered both instruments, or data for each instrument drawn randomly from the same population. The latter might mean experimental data (e.g., from a split-half experiment) or data from two surveys with probabilistic samples of the same population in roughly the same year. Drawing on the variety of data sources above, I will show that both data collected from the same respondents and data from two random samples of the same population can be used to establish comparability. I will also show that equating with data collected in different survey modes and with a time difference of a year remains substantially superior to the commonly used linear stretching method. This demonstrates that finding existing data for equating is practical for many harmonization projects; and even if additional experimental data must be collected, non-probabilistic online samples are acceptable.


Assessing measurement instruments across surveys in the Survey Data Recycling (SDR) framework: the use of ex-post harmonization controls

Professor Kazimierz M. Slomczynski (IFiS PAN and CONSIRT) - Presenting Author
Dr Irina Tomescu-Dubrow (IFiS PAN and CONSIRT)
Dr Ilona Wysmułek (IFiS PAN and CONSIRT)

This paper analyzes measurement instruments of institutional trust in cross-national survey datasets, focusing on data harmonized ex-post. Although reliance on ex-post harmonization is on the rise, we still know little about whether and how methodological variability between source surveys affects substantive results obtained from the harmonized dataset. To examine this problem, we focus on institutional trust, measured by trust in parliament, the legal system, and political parties, and apply the Survey Data Recycling (SDR) framework. A main idea in SDR is that researchers can establish a standard formulation of the question and a standard scale for answers, and then control for deviations from these standards across surveys. In this paper, we consider (a) semantic controls dealing with the object (O), attribute (A), and criterion (C) of the question, and (b) properties of the scales such as length (L), direction (D), and polarity (P). Preliminary analyses show that both sets of control variables, (O, A, C) and (L, D, P), are important for assessing the relationship of institutional trust with its determinants (gender, age, and education). Analyses are based on data from over 3,000 national surveys from more than 100 countries.
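The control-variable logic can be pictured with a schematic regression: substantive determinants enter alongside terms for the deviations from the SDR standards, so that the methodological terms absorb between-survey instrument variation. The Python sketch below is an invented stand-in, not the authors' model or the SDR data schema; every column name and value is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented stand-in for a pooled, ex-post harmonized dataset.
rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "trust": rng.normal(5, 2, n),                   # harmonized trust score
    "gender": rng.choice(["f", "m"], n),
    "age": rng.integers(18, 90, n),
    "education": rng.integers(0, 21, n),            # years of schooling
    "scale_length": rng.choice([4, 7, 11], n),      # L: points on source scale
    "scale_polarity": rng.choice(["uni", "bi"], n), # P: uni- vs. bipolar
})

# Substantive determinants plus harmonization controls: the categorical
# control terms soak up methodological variation between source surveys.
model = smf.ols(
    "trust ~ gender + age + education"
    " + C(scale_length) + C(scale_polarity)",
    data=df,
).fit()
print(model.params)
```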


In harmony: Exploring the feasibility of ex-post harmonisation of European Social Survey and European Values Study items

Ms Angelica Maineri (Tilburg University) - Presenting Author
Dr Eva Aizpurua (City University of London)
Professor Rory Fitzgerald (City University of London)
Dr Vera Lomazzi (GESIS - Leibniz Institute for the Social Sciences)
Dr Ruud Luijkx (Tilburg University)

The European Social Survey (ESS) and the European Values Study (EVS) are large, cross-national social surveys that collect data in most European countries. The former is biennial (2002-present), while the latter goes to the field every nine years (1981-present). As part of the ESS-SUSTAIN-2 project, both groups are exploring the possibility of collecting EVS data as part of the ESS infrastructure. The suggested strategy consists of bridging compatible measures when possible and potentially designing a 30-item module with EVS core questions not already covered in the ESS questionnaire. This presentation summarises the work undertaken to assess the comparability of 25 substantive items previously identified as potentially compatible between the two surveys. The empirical comparison is based on data from ESS Round 9 (2018-2019) and EVS Wave 5 (2017-2020), limiting the analysis to countries that conducted the fieldwork within a similar time frame and used comparable sampling approaches. ESS and EVS measures are compared based on an assessment of their validity and, when possible, their reliability. When evidence of validity and reliability is obtained, distributions are also compared. Several analytical methods are employed, ranging from chi-square and t-tests for comparing distributions to multi-group confirmatory factor analysis for assessing the measurement invariance of multi-dimensional concepts. This comparison was conducted to investigate the conceptual overlap between the surveys and to understand whether, in principle, it would be possible to bridge data collected using the ESS infrastructure with the EVS time series. This is an essential step in informing decisions about future cooperation between the two surveys. The findings of this methodological work will be of interest to researchers working on survey harmonisation and cross-national surveys, and to data users who wish to pool data from different sources.
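To illustrate the simplest of those checks, here is a small Python sketch of a chi-square comparison of response distributions for one bridging item. The counts are invented for the example, and the measurement-invariance step (multi-group confirmatory factor analysis) would require dedicated SEM software rather than this two-line test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented response counts for one candidate bridging item
# (categories 1-4), tabulated separately for ESS and EVS respondents.
ess_counts = np.array([120, 340, 410, 130])
evs_counts = np.array([100, 360, 390, 150])

# Chi-square test of homogeneity: do the two surveys yield the same
# response distribution for the (nominally) compatible item?
chi2, p, dof, expected = chi2_contingency(np.vstack([ess_counts, evs_counts]))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```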