ESRA 2023 Program

All time references are in CEST

Assessing the Quality of Survey Data 1

Session Organiser: Professor Jörg Blasius (University of Bonn)
Time: Wednesday 19 July, 11:00 - 12:30
Room: U6-01e

This session will provide a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error or, more precisely, many different sources of methodologically induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when the methodologically induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Keywords: Quality of data, task simplification, response styles, satisficing

Papers

Detection and handling of response time outliers in online surveys

Dr Tanja Kunz (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Ms Patricia Hadler (GESIS - Leibniz Institute for the Social Sciences)

In online surveys, response time data are often used to draw conclusions about respondents’ processing of survey questions and assess the quality of survey data. However, detecting and handling outliers, that is, extremely long or short response times, is crucial before analyzing response time data. There are several methods for detecting outliers, but little empirical evidence to guide survey researchers on which method to use. We compared nine outlier detection methods, using nine questions that differ in key characteristics and data from a probability and a nonprobability online panel. The results show that the outlier detection methods differ considerably in terms of the proportion of outliers detected and the effects of outlier exclusion on response time data (e.g., mean, skewness, and kurtosis); the effects of outlier exclusion are more pronounced in the nonprobability panel. The effects of outlier exclusion on substantive findings and recommendations for outlier detection methods to use are discussed.
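The abstract does not name the nine detection methods compared. As a minimal sketch of one widely used family of rules (not the authors' code; the log transformation and the cut-off k = 3 are assumptions for illustration only), a robust median ± k·MAD rule in R might look like this:

# Minimal sketch (not the authors' code): flag response-time outliers with a
# median +/- k*MAD rule applied to log response times; k = 3 is an assumed cut-off.
flag_rt_outliers <- function(rt_seconds, k = 3) {
  log_rt <- log(rt_seconds)               # response times are strongly right-skewed
  center <- median(log_rt, na.rm = TRUE)
  spread <- mad(log_rt, na.rm = TRUE)     # robust scale estimate
  abs(log_rt - center) > k * spread       # TRUE = flagged as outlier
}

# Hypothetical usage for one question: exclude flagged cases before analysis
rt <- c(4.2, 6.8, 5.1, 180.0, 0.3, 7.4)   # made-up response times in seconds
rt_clean <- rt[!flag_rt_outliers(rt)]

Comparing the mean, skewness, and kurtosis of rt and rt_clean mirrors the kind of before-and-after comparison the paper reports across methods.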


Investigating Panel Conditioning Effects in the Life in Australia™ Panel

Mr Sam Slamowicz (The Social Research Centre) - Presenting Author
Professor Darren Pennay (Australian National University)
Professor Paul Lavrakas (The Social Research Centre)

As survey research increasingly migrates to online panels, one threat facing researchers is that data quality could be diminished by panel conditioning effects caused by repeatedly interviewing the same respondents over time.

This presentation reports findings from an ongoing study investigating panel conditioning in Australia’s only probability-based online panel – Life in Australia™. Our focus is on changes in the reporting of attitudes over time in a manner consistent with Cognitive Stimulus Theory (CST; Sturgis et al., 2009). CST hypothesises that repeated exposure to similar questions over time leads to a change in attitudes amongst some panellists, which will be manifested as a crystallisation of expressed attitudes between the first and subsequent waves of a survey.

An important aspect of panel conditioning is whether there is a reduction in socially desirable reporting over time as panellists become more comfortable with the panel and its sponsors, and more willing to report socially undesirable behaviours and attitudes.

Our analysis (n=1459 for five waves) reveals evidence of arguably “beneficial” panel conditioning, demonstrated by an increase in reliability and stability of responses to attitudinal questions. However, we have not found compelling evidence of a decline in socially desirable reporting as panellists acclimate to the panel environment.
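The abstract does not specify how stability was quantified. As a rough, hypothetical illustration in R (the data frame panel and the columns att_w1 … att_w5 are assumptions, not the authors' data or analysis), attitude crystallisation could be approximated by wave-1 versus later-wave correlations of the same item:

# Minimal sketch (not the authors' analysis): wave-1 vs. later-wave correlations of
# one attitude item as a simple stability indicator; rising correlations across waves
# would be consistent with crystallisation under CST.
# 'panel' is a hypothetical data frame, one row per panellist, columns att_w1..att_w5.
stability <- sapply(2:5, function(w) {
  cor(panel$att_w1, panel[[paste0("att_w", w)]], use = "pairwise.complete.obs")
})
names(stability) <- paste0("w1_vs_w", 2:5)
stability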

We will also be reporting on variations in panel conditioning across population subgroups and by question type, and whether panellists show evidence of negative panel conditioning effects such as speeding, straight-lining and other forms of careless reporting.


Frequency, Extent and Characteristics of Straightlining: Results from a Large-Scale Online Student Survey in Germany

Dr Martina Kroher (German Centre for Higher Education Research and Science Studies) - Presenting Author
Mr Karsten Becker (German Centre for Higher Education Research and Science Studies)
Mr Jonas Koopmann (German Centre for Higher Education Research and Science Studies)

Various aspects of nonsampling error can influence the accuracy of survey data in the sense of the total survey error framework (Groves, 1989; Groves et al., 2009). This includes measurement errors occurring in the response process. Against this background, satisficing theory (Krosnick, 1991) distinguishes between "optimizing" and "satisficing": rather than always giving their best answer, respondents may fall back on specific response patterns (acquiescence, tendency to the middle, etc.). One response pattern that can be interpreted as "strong satisficing" is straightlining. Another form of respondent misbehavior we take into account is speeding, i.e., individuals do not take the time to read the questions and answer options appropriately but instead answer very quickly.
Our first results indicate that older, male, and non-impaired students, as well as students with children and international students, show straightlining tendencies more frequently than their peers. Furthermore, we see that the time to complete the survey is associated with the device (tablet, mobile phone, desktop computer, laptop) used to participate in the survey. Additionally, straightliners complete the survey faster and use mobile devices less often.
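As a minimal sketch of how such indicators can be operationalised in R (the objects grid and duration and the speeding cut-off are assumptions for illustration, not the authors' definitions):

# Minimal sketch (not the authors' code): simple straightlining and speeding flags.
# 'grid' is a hypothetical data frame with one column per item of a single grid
# battery; 'duration' is the total completion time in seconds per respondent.
is_straightliner <- apply(grid, 1, function(x) {
  x <- x[!is.na(x)]
  length(x) > 1 && length(unique(x)) == 1   # identical answer on every item
})

# Speeding flagged relative to the sample median; the factor 0.5 is illustrative only.
is_speeder <- duration < 0.5 * median(duration, na.rm = TRUE)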


References
Groves, R. M. (1989). Survey Errors and Survey Costs. New York: Wiley.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology. Hoboken, NJ: Wiley.
Krosnick, J. A. (1991). Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology, 5(3), 213-236.


Detecting response styles in the PISA 2018 Student Questionnaire with IR-Tree Models: A mixed approach to interpreting response styles

Miss M. Carmen Navarro-González (University of Granada) - Presenting Author
Dr José-Luis Padilla (University of Granada)
Dr Luis-Manuel Lozano (University of Granada)
Dr Álvaro Postigo (University of Granada and University of Oviedo)

Response styles (RS), such as acquiescence or extremity responding, are a concern in Likert-type rating scales given that they can reflect “satisficing” and undermine the validity of responses to survey questions by overestimating or underestimating the true trait levels (e.g., Böckenholt, 2017; Park & Wu, 2019). Böckenholt (2012; 2017) proposed that multiple response processes operate in the judgment phase when answering Likert-type rating items, processes that can be modeled by a tree-structure-based IRT model: first, respondents determine whether they agree or disagree with the item; then, they decide how strong their agreement or disagreement is. These IR-Tree models can help to detect RS and to disentangle them from the substantive trait measures. The aim of this study was to apply IR-Tree models to detect acquiescence, disacquiescence, and extreme response styles among 11,599 Spanish adolescents in the “Sense of Belonging to School Scale” (SBSS) from PISA 2018 (OECD, 2018). The SBSS consists of six four-point Likert-type items (from “Strongly agree” to “Strongly disagree”), with three items positively keyed and the other three negatively keyed. We tested two IR-Tree models: a descriptive model to detect two different extreme response styles, and an explanatory model to detect acquiescence and disacquiescence response styles (Park & Wu, 2019). We also explored the effect of the keying direction of items. Analyses were carried out using R. In addition, we will illustrate how to conduct a cognitive interviewing study to obtain qualitative evidence for interpreting RS results.
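As a minimal sketch of the pseudo-item coding that underlies such two-node IR-Tree models (an illustration of the general Böckenholt-style decomposition in R, not the authors' code; it assumes items are scored 1–4 and already aligned for keying direction):

# Minimal sketch (not the authors' code): Böckenholt-style pseudo-item coding for a
# two-node IR-Tree on a four-point Likert item
# (1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree).
# Node 1: direction (agree vs. disagree); node 2: extremity of the chosen side.
tree_code <- function(x) {
  direction <- ifelse(x %in% c(3, 4), 1L, ifelse(x %in% c(1, 2), 0L, NA))
  extremity <- ifelse(x %in% c(1, 4), 1L, ifelse(x %in% c(2, 3), 0L, NA))
  data.frame(direction = direction, extremity = extremity)
}

tree_code(c(1, 2, 3, 4, NA))
# The resulting binary pseudo-items can then be fitted with standard IRT software
# to separate the substantive trait from extreme-response tendencies.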