
ESRA 2021 full program


Improving measurement and sampling by additional data sources

Session Organiser: Dr Felix Bader (TU Kaiserslautern)
Time: Friday 23 July, 13:15 - 14:45

Are survey responses comparable and valid when different question formats or survey modes are used? Can we improve the sampling of specific target populations? Adding information from external sources can help overcome these problems, provided the external information is valid.

Magdolen, Chlond and Vortisch calibrate German survey data on long-distance travel against official airport statistics.

Singh and Roth present an experiment using equipercentile equating to harmonize frequency or quantity questions across surveys with different answer categories.

Kostadintcheva and Allum experimentally compare the measurement of sensitive questions across online and offline self-completion modes within the UK Understanding Society Study.

Landrock and Aßmann use information on street sections as screening criteria to increase the recruitment of low-education participants for the German National Educational Panel Study.

Christie experimentally tests administrative records versus screening by interviewers to efficiently filter households with children for a boost sample in the Scottish Health Survey.

Keywords: linking data, external data, measurement comparability, informed sampling, response validity

Bias in Retrospective Surveys – Lessons Learned from Surveys on Long-Distance Travel

Ms Miriam Magdolen (Karlsruhe Institute of Technology) - Presenting Author
Dr Bastian Chlond (Karlsruhe Institute of Technology)
Professor Peter Vortisch (Karlsruhe Institute of Technology)

In the field of travel behaviour research, long-distance travel surveys face particular challenges because such events are rare and occur irregularly in individual travel behaviour. Surveys therefore usually take place retrospectively, asking participants to report their long-distance trips in the last three, six or twelve months. A shortcoming of such a retrospective approach is that the quality of the data also depends on participants' memory. In addition, participants tend to report selectively on specific long-distance trips, as the motivation to give information on vacation and leisure activities is high. Current research indicates that people are more likely to report distant and long vacation trips by plane than shorter and closer trips for visiting friends and relatives. Furthermore, long-distance travel is strongly characterized by seasonality: the time of year at which people participate in a long-distance travel survey has a strong effect on the trips reported. This can lead to large biases when extrapolating and weighting survey data, e.g. an over- or underestimation of specific types of travel.
This study aims to provide insights into these challenges of collecting data on long-distance travel. For this, three different surveys conducted in Germany between 2016 and 2019 are compared in terms of their survey methods and the impacts on the collected data. Two of the surveys were online surveys focused on collecting data on touristic and long-distance travel of individuals. The third survey was both online and paper-based and focused primarily on everyday travel, but included a specific module on vacations and business trips with overnight stays for a subset of participants. This study not only highlights the problems and challenges in data collection, but also provides indications on how to deal with them in the processing and analysis of the survey data.

Of particular importance is the linkage with external data sources. Using the example of air travel, the need to link external data, for example from the Federal Statistical Office or Eurostat, with the survey data is demonstrated. The official statistics contain the number of passengers at airports and thus allow for a comparison with the extrapolated number of air trips in the survey data. However, there is the challenge that the official statistics refer to a different population, since they include not only residents but also tourists from other countries. Therefore, adjustments must first be made before the data can be used as a comparison and calibration variable. For air travel, the results show that a simple extrapolation of the survey data leads to an overestimation of air travel for the German population, which underlines that the use of external information for calibration is essential. The study concludes with a discussion of the applicability of linking survey data with external data sources and of the advantages and disadvantages of different data collection methods for obtaining more accurate data on long-distance travel.
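As a concrete illustration of this calibration step, consider the minimal Python sketch below. It is not the authors' implementation: the resident share, weights and counts are invented, and the sketch simply rescales the extrapolation weights of reported air trips so that their weighted total matches an official passenger benchmark restricted to residents.

```python
# Minimal benchmark-calibration sketch for air trips (hypothetical data).

def calibrate_air_trips(weights, is_air_trip, official_passengers, resident_share):
    """Rescale extrapolation weights of air-trip records to an external benchmark.

    official_passengers -- airport passenger count from official statistics
    resident_share      -- assumed share of those passengers who are residents
    """
    benchmark = official_passengers * resident_share      # residents only
    survey_total = sum(w for w, air in zip(weights, is_air_trip) if air)
    factor = benchmark / survey_total                     # < 1: survey overestimates
    return [w * factor if air else w
            for w, air in zip(weights, is_air_trip)]

# Toy magnitudes, purely illustrative:
weights = [1_000.0, 800.0, 1_200.0]   # extrapolation weight per reported trip
is_air = [True, False, True]
new_weights = calibrate_air_trips(weights, is_air,
                                  official_passengers=3_000,
                                  resident_share=0.6)
# benchmark = 1_800; weighted air trips = 2_200; factor ≈ 0.82
```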


Harmonizing frequency and quantity questions using equating

Dr Ranjit K. Singh (GESIS – Leibniz Institute for the Social Sciences)
Mr Matthias Roth (GESIS – Leibniz Institute for the Social Sciences) - Presenting Author

In survey research, it is common practice to ask about frequencies or quantities with discrete response options, such as “daily”, “most days”, “once a week”, “once a month” or “less often”. And even in cases where such quantities or frequencies were collected as precise amounts, the data is usually binned into discrete categories to protect respondents’ privacy. In this context, different surveys usually provide such data in different category schemes. This may mean different numerical category boundaries or differences in granularity (i.e., the number of categories). For researchers trying to compare or harmonize data from two or more surveys, this poses three challenges: (1) how to harmonize the different numerical categories, (2) how to correct for response biases, and sometimes (3) how to harmonize discrete response alternatives (e.g., “daily”) with response alternatives posed in relative terms (e.g., “often”).
To better understand these challenges, let us consider a question about the number of books the respondent possesses. The first challenge is to reconcile different numerical boundaries. Imagine one survey providing the category “26-50 books” and the other providing a category “20-40 books”. Aside from such a mismatch of numerical boundaries, there is also the issue of granularity: for example, one survey providing three broad categories and another seven detailed ones. The second challenge occurs because category schemes may bias respondents’ answers. For example, if a survey provides categories that cover a very large number of books, respondents may feel pressured to overreport the number of books they possess, and there is a substantial body of research demonstrating just such biases. If such biases occur, then we can no longer assume that all respondents chose the objectively correct numerical category. The last challenge concerns a special case: what if one survey asks about the books with absolute numerical categories (e.g., “fewer than twenty books”) and the other in relative terms (e.g., “few books”)? Here, the question arises how many books respondents consider to be “few”, and so on.
In this talk, we present the results of two method experiments which show that these challenges can be overcome by using equipercentile equating. Equipercentile equating is originally a method from psychometrics used to make the results of different psychometric tests comparable. However, we aim to show that its mathematical logic can also be applied to measures of frequency or quantity, and that it harmonizes efficiently between scales, including discrete and relative ones, while minimizing the effects of response bias. This approach to harmonization is useful for researchers trying to create datasets from different survey instruments covering similar constructs. It is also promising for data producers and archives, who can use equipercentile equating to improve comparability and repair breaks in time series.
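As a rough sketch of that mathematical logic, the following Python snippet maps each category of a coarse scheme onto a fractional position on a finer scheme by matching mid-percentile ranks. This is a simplified illustration under the assumption that only the two observed category distributions are available; it is not the authors' implementation and omits their bias-correction step.

```python
import numpy as np

def percentile_ranks(freqs):
    """Mid-percentile rank of each category: P(X < k) + P(X = k) / 2."""
    p = np.asarray(freqs, dtype=float)
    p /= p.sum()
    return np.cumsum(p) - p / 2

def equate(source_freqs, target_freqs):
    """Map each source category to the fractional target category
    with the same percentile rank, via linear interpolation."""
    src_pr = percentile_ranks(source_freqs)
    tgt_pr = percentile_ranks(target_freqs)
    return np.interp(src_pr, tgt_pr, np.arange(1, len(tgt_pr) + 1))

# Invented counts: a 3-category and a 5-category frequency instrument.
coarse = [300, 500, 200]
fine = [100, 250, 300, 250, 100]
print(equate(coarse, fine))  # ≈ [1.57, 3.25, 4.71] on the 5-point scale
```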


Mode Effects on Measurement: Do CAWI and CASI Survey Estimates Differ?

Mrs Katya Kostadintcheva (Institute for Social and Economic Research) - Presenting Author
Professor Nick Allum (Institute for Social and Economic Research)

Mixed-mode surveys, where two or more modes are used for data collection and the responses from all modes are combined at the analysis stage, have become common practice in recent years. This design has a number of benefits, such as better coverage and response rates achieved by reaching different groups of people, as well as lower costs. Nevertheless, using mixed modes presents a number of challenges, and the effect on measurement is one of them. Most of the research on the effect of mode on measurement has focused on comparing interviewer-administered surveys with self-completion modes. However, very little is known about differences between computerised self-completion modes, such as Computer-Assisted Web Interviewing (CAWI) and Computer-Assisted Self-Interviewing (CASI), and the effect on measurement when they are used in combination.

Understanding Society, the UK Household Longitudinal Study, is one such survey where CAWI and CASI are used together for data collection. Up to wave 7, Understanding Society was a single-mode face-to-face survey in which all respondents completed a CASI section on the interviewer’s laptop with the interviewer present. With the introduction of a mixed-mode design from wave 7 of the survey, some interviews were completed in CAWI by the respondents. As such, under the mixed-mode design a proportion of respondents complete survey questions in one mode, CASI, while the rest complete the same questions in a different self-completion mode, CAWI.

Understanding if and how survey estimates differ between CAWI and CASI, and whether the differences are larger for sensitive questions, are the two research questions of this paper. The measures analysed in this research cover subjective questions and some sensitive items, including the General Health Questionnaire (GHQ-12) and the SF-12 Health Survey, which are widely used self-report instruments for common mental disorders and overall health in the general population. We adopt two approaches to examining the effect of mode on responses by making use of a mode allocation experiment in wave 8 of Understanding Society. Overall, we find that there is little difference in measurement between CAWI and CASI, including for sensitive questions. Marginal differences in distributions are likely due to selection effects. Our findings suggest that CAPI-CAWI mixed-mode designs are unlikely to be especially problematic from a measurement perspective.


Using Additional Information at Regional Level to Recruit Less Educated Survey Participants

Dr Uta Landrock (LIfBi - Leibniz Institute for Educational Trajectories) - Presenting Author
Professor Christian Aßmann (LIfBi - Leibniz Institute for Educational Trajectories)

The German National Educational Panel Study (NEPS) collects data on educational processes. We find that less educated participants are underrepresented in our surveys. We therefore want to investigate whether it is possible to identify less educated target persons in the gross sample before data collection, so that we can adjust our recruitment strategies. The idea is to use additional information at the regional level (street section), provided by the infas360 database, to develop an index for identifying less educated people. Relevant characteristics at the regional level might be, for example, the quality of the residential area (unemployment rate, predominant social class, density of buildings, etc.), the type of building (living space per household, purchase price per square metre), or purchasing power (income).
Our first database consists of a gross sample (N=2,671) whose address information was enriched with additional information at the regional level. The net sample (N=504) includes information on the educational attainment of the respondents. We conducted exploratory factor analyses on the basis of the additional regional information and identified five factors. Preliminary results indicate that at least one factor differs significantly between the gross sample and the net sample. Furthermore, there are significant and robust associations between this factor and the educational variables of the respondents in the net sample. These results suggest that additional information may help to identify less educated participants. In a next step, based on the results of the factor analysis, we want to develop an index approximating the educational level of an address in the gross sample.
Finally, we want to apply this procedure in a subsequent study to answer the research question of whether it is possible to identify less educated people before data collection. The fieldwork for the subsequent study will presumably begin in early 2022. The data collection consists of a competence assessment of children (aged 6-7 years) and a CAPI survey with their parents on their income, educational attainment and occupational situation. The gross sample (N=5,000) will be enriched with the index for identifying less educated target persons. During the fieldwork, all contact attempts (information for each address on whether a contact attempt was made), successful contacts (whether a contact with a target person was realized), and realized interviews will be documented to allow differentiation between non-contacts, refusals and participations. To answer the research question regarding the identification of less educated participants before data collection, we have to address three questions: Does the index differ between the subgroups of non-contacts, refusals and participations? Is the index predictive of participation? And is it predictive of the measured competence level? Thus, we will be able to investigate whether the index generated ex ante from external information is also empirically an appropriate proxy measure of low education.
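To sketch how such an index might be derived, the hypothetical Python snippet below extracts factors from regional covariates attached to gross-sample addresses and takes one factor as the education proxy. The column names, the use of scikit-learn, and the selection of the first factor are all assumptions for illustration; the actual NEPS procedure may differ.

```python
# Hypothetical index construction from regional covariates (invented columns).
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

REGIONAL_VARS = ["unemployment_rate", "building_density",
                 "living_space_per_hh", "price_per_sqm", "purchasing_power"]

def education_index(gross_sample: pd.DataFrame, n_factors: int = 5) -> pd.Series:
    """Return factor scores usable as an ex-ante proxy for low education."""
    X = StandardScaler().fit_transform(gross_sample[REGIONAL_VARS])
    scores = FactorAnalysis(n_components=n_factors).fit_transform(X)
    # The relevant factor would in practice be chosen by its association
    # with respondents' education in the net sample; the first is a placeholder.
    return pd.Series(scores[:, 0], index=gross_sample.index, name="edu_index")
```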


Using administrative records to improve fieldwork efficiency of a child-boost sample: Experiences from the Scottish Health Survey

Mrs Shanna Christie (ScotCen Social Research) - Presenting Author

The Scottish Health Survey (SHeS) is a cross-sectional household study first conducted in 1995, then in 1998 and 2003, and annually since 2008. The survey is designed to be representative of the Scottish population living in private households and uses a probability sampling approach based on the Postcode Address File (PAF). From the sampled addresses (c.12,000) we aim to interview c.5,100 adults and c.2,000 children each year. The interview collects information about individuals’ health status and conditions, as well as lifestyle factors which can impact health, such as diet, smoking, alcohol consumption and physical activity.

A key objective of the SHeS is to collect health data about children. The sampling approach used to interview c.5,100 adults only produces data from c.1,000 children. Thus, it is also necessary to draw a child boost sample to meet the target of 2,000 child interviews. The child boost sample uses the same sampling frame, but because there is little information about the household other than the address, a high proportion of these households turn out to be ineligible because no children live there (c.80% are ineligible). This means a great deal of resources is spent on doorstep screening (interviewer time and travel costs) to achieve a relatively small sample, as the calculation below illustrates.
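To convey the scale of the problem, here is a back-of-the-envelope Python sketch. The c.80% ineligibility rate is taken from the text above; the interview target, the response rate and the post-matching eligibility rate are invented purely for illustration.

```python
# Rough screening-burden arithmetic with invented rates (illustration only).

def addresses_needed(target_interviews: int,
                     eligibility_rate: float,
                     response_rate: float) -> float:
    """Addresses to issue so screening yields the target number of interviews."""
    return target_interviews / (eligibility_rate * response_rate)

# Unscreened PAF sample: c.20% of addresses contain a child (c.80% ineligible).
unscreened = addresses_needed(1_000, eligibility_rate=0.20, response_rate=0.60)

# Hypothetical record-matched sample: assume matching correctly flags
# households with children at 90% of issued addresses.
matched = addresses_needed(1_000, eligibility_rate=0.90, response_rate=0.60)

print(round(unscreened), round(matched))  # ~8333 vs ~1852 addresses
```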

The inefficiency of this approach prompted consideration of the potential to supplement the PAF sample with administrative health data to identify eligible households (i.e. those with a child aged under 16), with the aim of improving fieldwork efficiency. In 2020 we conducted an experiment in which a PAF sample that had been screened for eligibility by field interviewers was compared with a PAF sample matched to health records. This provides an example of a low-risk PAF sampling frame enhancement that identifies key sub-populations of interest and better targets increasingly limited fieldwork resources by using administrative data. This is the first time this approach has been used in Scotland to identify households with children.

The presentation will outline the design and findings of the experiment, share lessons learned from applying this approach on the SHeS, and consider next steps, including the potential for rolling it out on a larger scale and for other sub-sample applications.

The Scottish Health Survey is funded by the Scottish Government and undertaken by ScotCen Social Research and the Office for National Statistics. Study website: https://www.gov.scot/collections/scottish-health-survey/