
ESRA 2023 Glance Program


All time references are in CEST

Measurement Error: Factors Influencing the Quality of Survey Data and Possible Corrections 1

Session Organisers: Dr Lydia Repke (GESIS - Leibniz Institute for the Social Sciences)
Ms Fabienne Krämer (GESIS - Leibniz Institute for the Social Sciences)
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Time: Wednesday 19 July, 11:00 - 12:30
Room U6-06

High-quality survey data are the basis for meaningful data analysis and interpretation. The choice of specific survey characteristics (e.g., mode of data collection) and instrument characteristics (e.g., number of points in a response scale) affects data quality, meaning that there is always some measurement error. There are several methods and tools for estimating these errors (e.g., the Survey Quality Predictor) and approaches for correcting them in data analysis. This session will discuss factors that influence data quality, methods or tools for estimating their effects, and approaches for correcting measurement errors in survey data.
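
To make the correction idea concrete, here is a minimal Python sketch of one standard approach: disattenuating an observed correlation between two items using their quality estimates (e.g., as produced by the Survey Quality Predictor). The quality values and names below are placeholders, not taken from any paper in this session.

    import math

    def corrected_correlation(r_observed: float, q1: float, q2: float) -> float:
        """Disattenuate an observed correlation between two survey items.

        q1 and q2 are the items' total quality coefficients (reliability
        times validity), as estimated e.g. by the Survey Quality Predictor.
        """
        return r_observed / math.sqrt(q1 * q2)

    # Made-up example: an observed correlation of 0.30 between two items
    # with estimated qualities of 0.70 and 0.65.
    print(round(corrected_correlation(0.30, 0.70, 0.65), 3))  # 0.445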

We invite papers that
(1) identify and discuss specific survey characteristics and their influence on data quality;
(2) identify and discuss specific instrument characteristics and their impact on data quality;
(3) discuss methods of estimating measurement errors and predicting data quality;
(4) present or compare tools for the estimation or correction of measurement errors;
(5) show how one can account for and correct measurement errors in data analysis.

Keywords: measurement error, data quality, correction, survey characteristics, item characteristics

Papers

Third-Party Influence: Effects of Spouse Presence during Survey Interviews on Life Satisfaction Scores in Germany

Ms Zoe Weisel (Eurac Research - Center for Advanced Studies) - Presenting Author
Dr Agnieszka Stawinoga (Eurac Research - Statistics Office)

As survey research has repeatedly shown, more than a third of all interviews are conducted in the presence of others (Reuband 1987). The presence of third parties during interviews may affect the interviewee’s response behavior and could potentially influence survey results, thus representing a form of response error in survey data. This paper therefore examines the relationship between third-party presence and survey responses. More specifically, it contributes to the discussion surrounding third-party influences on interview data by exploring the effects of spouse presence during interviews on response behavior for questions on life satisfaction. The study is set in Germany, using the German General Social Survey (ALLBUS) as a database and focusing on its sample of married individuals. Life satisfaction scores of married respondents interviewed with their spouses present are compared with those of respondents interviewed without their spouses present. The effect of spouse presence on life satisfaction responses is examined while taking the limits of the data set into account, such as the fact that the duration of spouse presence is not recorded.
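
As an illustration of the kind of group comparison described above, a short Python sketch follows; the file and column names are hypothetical, not the actual ALLBUS variable names.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical extract of married ALLBUS respondents; the columns
    # (life_satisfaction, spouse_present, age, sex) are invented.
    df = pd.read_csv("allbus_married.csv")

    # Regress life satisfaction (e.g., 0-10 scale) on an indicator of
    # whether the spouse was present during the interview, with controls.
    model = smf.ols("life_satisfaction ~ spouse_present + age + C(sex)",
                    data=df).fit()
    print(model.summary())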


Mapping the Circumstances and Side Activities during Survey Completion and Their Consequences in a Web Survey

Mr Adam Stefkovics (Center for Social Sciences) - Presenting Author
Mr Bence Ságvári (Center for Social Sciences)
Mrs Vera Messing (Center for Social Sciences)
Ms Blanka Szeitl (Center for Social Sciences)

As response rates of traditional interviewer-administered surveys keep dropping and their costs keep rising, web surveys are becoming increasingly popular. However, self-completion modes give the researcher limited control over the completion process. In the absence of an interviewer, web survey respondents are likely to engage in side activities and to become disoriented or distracted, all of which may introduce additional measurement error. Despite this, we know little about web survey respondents' environments and side activities. To this end, in a web survey (n=2000) conducted in Hungary on a non-probability-based panel, we asked several questions about different forms of multitasking, the device, the setting, and the time of completion, along with standard ESS questions for data quality checks. Using latent class analysis, we identified five distinct types of web survey respondents. Two of these groups engaged in different forms of multitasking or completed the survey in a non-ideal environment (e.g., on a small screen, or on the go with many people around). These groups showed some differences in the quality of the responses they provided. One group in particular had longer response times and more straightlining (but no difference in item nonresponse). Our findings suggest that the lack of control over completion allows a number of inattentive multitaskers into the sample, which may introduce significant measurement bias and increase the total survey error (TSE). The presentation offers a toolkit of items for identifying such respondents in self-completion web surveys.
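
For readers who want to construct similar flags themselves, here is a minimal pandas sketch of two of the quality indicators the abstract mentions (straightlining and response time), plus item nonresponse. All column names are invented, and the latent class analysis itself is not shown.

    import pandas as pd

    df = pd.read_csv("web_survey.csv")           # hypothetical survey file
    grid = [f"item_{i}" for i in range(1, 9)]    # a hypothetical item grid

    # Straightlining: identical answers across every item in the grid.
    df["straightlined"] = df[grid].nunique(axis=1).eq(1)

    # Speeding: completion time below half the median duration.
    df["speeder"] = df["duration_secs"] < 0.5 * df["duration_secs"].median()

    # Item nonresponse: share of missing answers within the grid.
    df["item_nonresponse"] = df[grid].isna().mean(axis=1)

    print(df[["straightlined", "speeder", "item_nonresponse"]].mean())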


Investigating Measurement Error in Mixed-Mode Establishment Surveys

Ms Corinna König (Institute for Employment Research) - Presenting Author
Mr Joe Sakshaug (Institute for Employment Research)

Like many social surveys, establishment surveys are (or have been) transitioning from traditional interviewer modes to online and mixed-mode data collection. One example is the IAB Establishment Panel of the Institute for Employment Research (IAB), which until 2018 was primarily a face-to-face survey. Since then, the IAB has experimented with a sequential mixed-mode design (web first, followed by face-to-face) as an alternative to the traditional face-to-face design. Previous analyses have shown that the mixed-mode design maintains response rates at lower costs than the single-mode (face-to-face) design, but the question remains to what extent introducing the web mode affects measurement quality – a question that has rarely been addressed in the establishment survey literature. In this presentation, we address this question by comparing the survey responses of the single- and mixed-mode experimental groups with corresponding administrative data from employer-level social security notifications. Treating the administrative data as the “gold standard”, we assess the accuracy of survey responses in both mode designs and evaluate measurement equivalence. In addition, we report on differences in accuracy between the individual web and face-to-face modes. Furthermore, we consider differences on several alternative indicators of response quality and behavior, including item nonresponse, socially desirable responding, rounded values, acquiescence, and primacy and recency effects. The study thus provides comprehensive insights into data quality in mixed-mode data collection in establishment surveys and informs survey practitioners about the implications of switching from single- to mixed-mode designs in large-scale establishment panels.
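
A minimal sketch of the gold-standard comparison logic follows, assuming a linked survey-administrative file with invented column names (the actual IAB data and measures differ).

    import pandas as pd

    df = pd.read_csv("linked_establishments.csv")  # hypothetical linked file

    # Deviation of the survey report from the administrative record,
    # here for a hypothetical employee-count item.
    df["abs_error"] = (df["employees_survey"] - df["employees_admin"]).abs()
    df["exact_match"] = df["employees_survey"] == df["employees_admin"]

    # Average accuracy by mode design (e.g., single-mode vs. mixed-mode).
    print(df.groupby("mode_design")[["abs_error", "exact_match"]].mean())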


Attitudes, Knowledge, and Behavior over Time: How Repeated Interviewing Triggers Reflection Processes in Respondents

Ms Fabienne Krämer (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Mr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)
Ms Bella Struminskaya (Utrecht University)
Mr Michael Bosnjak (Trier University)
Mr Matthias Sand (GESIS - Leibniz Institute for the Social Sciences)
Mr Bernd Weiß (GESIS - Leibniz Institute for the Social Sciences)

Longitudinal surveys are widely valued for making it possible to track attitudes and behaviors over time and thus to study the change or stability of social patterns. However, previous research suggests that repeated interviewing itself leads to changes in respondents’ attitudes and behaviors by increasing awareness of survey topics and by triggering intensive reflection processes as well as further information search on the addressed topics (i.e., the cognitive stimulus model; Sturgis et al., 2009). In line with this, previous studies have shown that respondents’ attitudes become more stable and reliable over time, knowledge levels increase, respondents become more opinionated, and the salience of the addressed topics increases, followed by changes in behavior. However, the existing studies are mostly non-experimental and only assume that attitudes, behavior, and knowledge change as a result of reflection. To date, the mechanisms underlying changes in these question types remain largely unknown.
Our study investigates attitudes, behavior, and knowledge over the course of a panel study, as well as the mechanisms underlying observed changes, with a randomized experiment carried out in both a German probability-based panel study and a German online access panel. The study comprises six panel waves in total, allowing us to analyze respondents’ repeated answers to over 40 attitudinal, behavioral, and knowledge measures. Moreover, measuring self-reported response certainty, attitude strength, and topic salience over time enables us to gain further insights into the mechanisms underlying observed changes. Further, the experimental design, which manipulates how often respondents receive identical questions over the course of the six waves (once vs. three times vs. six times), allows us to systematically investigate how different levels of exposure to identical questions affect attitudes, behavior, and knowledge over time.
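
One simple way to quantify the stabilization the abstract describes is a test-retest correlation computed separately for each experimental arm. The sketch below uses invented column names and is not the authors' analysis.

    import pandas as pd

    df = pd.read_csv("panel_experiment.csv")  # hypothetical wide-format file

    # Correlate first- and last-wave answers to the same attitude item
    # within each exposure arm (question asked 1, 3, or 6 times).
    for arm, group in df.groupby("exposure_arm"):
        r = group["attitude_w1"].corr(group["attitude_w6"])
        print(f"exposure arm {arm}: test-retest r = {r:.2f}")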