All time references are in CEST
Assessing the Quality of Survey Data 1
Session Organiser | Professor Jörg Blasius (University of Bonn)
Time | Wednesday 19 July, 11:00 - 12:30 |
Room | U6-01e |
This session will provide a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many different kinds of systematic measurement error, or, more precisely, many different sources of methodologically induced variation, and all of them may have a strong influence on the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when the methodologically induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically induced variation in survey research, how to detect them, and the effects they have on substantive findings.
Keywords: Quality of data, task simplification, response styles, satisficing
Dr Tanja Kunz (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Ms Patricia Hadler (GESIS - Leibniz Institute for the Social Sciences)
In online surveys, response time data are often used to draw conclusions about respondents’ processing of survey questions and to assess the quality of survey data. However, detecting and handling outliers, that is, extremely long or short response times, is crucial before analyzing response time data. There are several methods for detecting outliers, but little empirical evidence to guide survey researchers on which method to use. We compared nine outlier detection methods, using nine questions that differ in key characteristics and data from both a probability and a nonprobability online panel. The results show that the outlier detection methods differ considerably in the proportion of outliers detected and in the effects of outlier exclusion on the response time distribution (e.g., mean, skewness, and kurtosis); these effects are more pronounced in the nonprobability panel. The effects of outlier exclusion on substantive findings and recommendations on which outlier detection methods to use are discussed.
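As a minimal illustration of the kind of rules being compared, the sketch below applies three common outlier-detection rules (an SD rule on log-transformed times, IQR fences, and a MAD-based robust z-score) to simulated response times. The data, thresholds, and choice of rules are illustrative assumptions and are not necessarily among the nine methods examined by the authors.

```r
# Simulated item-level response times in seconds (illustrative data only)
set.seed(1)
rt <- rlnorm(500, meanlog = 2.3, sdlog = 0.6)

# Rule 1: SD rule on log-transformed times, flagging values > 2 SD from the mean
log_rt <- log(rt)
sd_flag <- abs(log_rt - mean(log_rt)) > 2 * sd(log_rt)

# Rule 2: interquartile-range fences at 1.5 * IQR beyond the quartiles
q <- quantile(rt, c(0.25, 0.75))
iqr_flag <- rt < q[1] - 1.5 * diff(q) | rt > q[2] + 1.5 * diff(q)

# Rule 3: median absolute deviation, flagging robust z-scores above 3
mad_flag <- abs(rt - median(rt)) / mad(rt) > 3

# Proportion flagged and mean response time after exclusion, per rule
sapply(list(sd = sd_flag, iqr = iqr_flag, mad = mad_flag),
       function(f) c(prop_flagged = mean(f), mean_after_exclusion = mean(rt[!f])))
```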
Mr Nicolas Rodriguez (University of Michigan) - Presenting Author
Dr Richard Miech (University of Michigan)
A growing number of school-based surveys are transitioning from paper-and-pencil forms to electronic devices for data collection, which may produce a mode effect on estimates of attitudes or beliefs toward drug use. This study tested the potential survey mode effect on self-reported attitudes and beliefs about marijuana. We use data from the Monitoring the Future (MTF) study, which in 2019 provided electronic tablets for students to answer survey questions in a randomly selected half of all schools (intervention) and traditional paper-and-pencil forms in the other half (comparator). Results indicate that the relative risks (RRs) for all perceived risk, disapproval, and availability estimates were higher in electronic tablet than in paper-and-pencil surveys. The 95% confidence intervals (CIs) did not include the value of one for the perceived risk of trying marijuana once or twice (RR = 1.25; 95% CI = 1.05–1.48), occasionally (RR = 1.22; 95% CI = 1.05–1.43), and regularly (RR = 1.15; 95% CI = 1.03–1.28), and for disapproval of trying marijuana once or twice (RR = 1.13; 95% CI = 1.01–1.27) in 12th grade. The mode difference in the perceived risk of trying marijuana once or twice was also significant in 8th grade (RR = 1.05; 95% CI = 1.00–1.09). Interaction tests showed that mode effects differed between 12th and 10th grades for the perceived risk of using marijuana occasionally and regularly and for disapproval of trying marijuana once or twice, and between 10th and 8th grades for the perceived availability of marijuana. Levels of missing data were significantly lower for electronic tablets than for paper-and-pencil surveys, and missing data did not affect the direction of or differences in the estimates analyzed. Future research is needed to evaluate the change in survey modes and its potential effect on trends and data quality in school-based survey administration.
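For readers unfamiliar with the reported effect measure, the sketch below shows how a relative risk and its 95% Wald confidence interval on the log scale can be computed from two proportions. The counts are hypothetical and do not come from the MTF data.

```r
# Hypothetical counts: number reporting the outcome / number of respondents
x_tablet <- 620; n_tablet <- 1000   # electronic tablet mode
x_paper  <- 500; n_paper  <- 1000   # paper-and-pencil mode

p_tablet <- x_tablet / n_tablet
p_paper  <- x_paper / n_paper
rr <- p_tablet / p_paper            # relative risk, tablet vs. paper

# 95% Wald confidence interval computed on the log(RR) scale
se_log_rr <- sqrt(1 / x_tablet - 1 / n_tablet + 1 / x_paper - 1 / n_paper)
ci <- exp(log(rr) + c(-1, 1) * qnorm(0.975) * se_log_rr)

round(c(RR = rr, lower = ci[1], upper = ci[2]), 2)
```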
Mr Sam Slamowicz (The Social Research Centre) - Presenting Author
Professor Darren Pennay (Australian National University)
Professor Paul Lavrakas (The Social Research Centre)
As survey research increasingly migrates to online panels, one threat facing researchers is the possibility that data quality could be diminished by panel conditioning effects caused by repeatedly interviewing the same respondents over time.
This presentation reports findings from an ongoing study investigating panel conditioning in Australia’s only probability-based online panel, Life in Australia™. Our focus is on changes in the reporting of attitudes over time in a manner consistent with Cognitive Stimulus Theory (CST) (Sturgis et al., 2009). CST hypothesises that repeated exposure to similar questions over time leads to a change in attitudes amongst some panellists, manifested as a crystallisation of expressed attitudes between the first and subsequent waves of a survey.
An important aspect of panel conditioning is whether there is a reduction in socially desirable reporting over time as panellists become more comfortable with the panel and its sponsors, and more willing to report socially undesirable behaviours and attitudes.
Our analysis (n=1459 for five waves) reveals evidence of arguably “beneficial” panel conditioning, demonstrated by an increase in reliability and stability of responses to attitudinal questions. However, we have not found compelling evidence of a decline in socially desirable reporting as panellists acclimate to the panel environment.
We will also report on variations in panel conditioning across population subgroups and by question type, and on whether panellists show evidence of negative panel conditioning effects such as speeding, straight-lining and other forms of careless reporting.
Dr Martina Kroher (German Centre for Higher Education Research and Science Studies) - Presenting Author
Mr Karsten Becker (German Centre for Higher Education Research and Science Studies)
Mr Jonas Koopmann (German Centre for Higher Education Research and Science Studies)
Various aspects of nonsampling error can influence the accuracy of survey data in the sense of the total survey error framework (Groves, 1989; Groves et al., 2009). This includes measurement errors occurring in the response process. Against this background, satisficing theory (Krosnick, 1991) distinguishes between "optimizing" and "satisficing": respondents do not always give their best answer but instead show specific response patterns (acquiescence, tendency toward the middle category, etc.). One response pattern that can be interpreted as "strong satisficing" is straightlining. Another form of respondent misbehavior we take into account is speeding, i.e., individuals do not take the time to read the questions and answer options appropriately but instead answer very quickly.
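As a minimal illustration of how such patterns can be flagged, the sketch below marks straightliners (identical answers across an item grid) and speeders (completion times below an assumed cut-off) in simulated data. The data and thresholds are illustrative assumptions, not the authors' operationalisation.

```r
# Simulated data: 200 respondents, an 8-item grid on a 1-5 scale, and
# total completion times in seconds (illustrative only)
set.seed(2)
grid <- matrix(sample(1:5, 200 * 8, replace = TRUE), nrow = 200)
duration <- rlnorm(200, meanlog = 6, sdlog = 0.4)

# Straightlining: identical answers across all items of the grid
straightliner <- apply(grid, 1, function(x) length(unique(x)) == 1)

# Speeding: completing in less than half the median duration (assumed cut-off)
speeder <- duration < 0.5 * median(duration)

table(straightliner, speeder)
```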
Our first results indicate that older, male, and non-impaired students, as well as students with children and international students, show straightlining tendencies more frequently than their peers. Furthermore, we see that the time to complete the survey is associated with the device used to participate (tablet, mobile phone, desktop computer, laptop). Additionally, straightliners complete the survey faster and less often use mobile devices.
References
Groves, R. M. (1989). Survey Errors and Survey Costs. New York: Wiley.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology. Hoboken, NJ: Wiley.
Krosnick, J. A. (1991). Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology, 5(3), 213–236.
Miss M. Carmen Navarro-González (University of Granada) - Presenting Author
Dr José-Luis Padilla (University of Granada)
Dr Luis-Manuel Lozano (University of Granada)
Dr Álvaro Postigo (University of Granada and University of Oviedo)
Response styles (RS), such as acquiescence or extreme responding, are a concern in Likert-type rating scales given that they can reflect “satisficing” and undermine the validity of responses to survey questions by overestimating or underestimating the true levels of the traits (e.g., Böckenholt, 2017; Park & Wu, 2019). Böckenholt (2012, 2017) proposed that multiple response processes operate in the judgment phase when answering Likert-type rating items, processes that can be modeled by a tree-structure-based IRT model: first, respondents determine whether they agree or disagree with the item; then, they decide how strong their agreement or disagreement is. These IR-Tree models can help detect RS and disentangle them from the substantive trait measures. The aim of this study was to apply IR-Tree models to detect acquiescence, disacquiescence, and extreme response styles among 11,599 Spanish adolescents on the “Sense of Belonging to School Scale” (SBSS) from PISA 2018 (OECD, 2018). The SBSS consists of six four-point Likert-type items (from “Strongly agree” to “Strongly disagree”), with three items positively keyed and the other three negatively keyed. We tested two IR-Tree models: a descriptive model to detect two different extreme response styles, and an explanatory model to detect acquiescence and disacquiescence response styles (Park & Wu, 2019). We also explore the effect of item keying direction. Analyses were carried out using R. In addition, we will illustrate how to conduct a cognitive interviewing study to obtain qualitative evidence for interpreting the RS results.
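As a minimal illustration of the tree structure described above, the sketch below recodes a four-point Likert response into the two binary pseudo-items of a simple IR-Tree decomposition: a direction node and an extremity node. The numeric coding of the categories is an assumption made for illustration, not the authors' exact model specification.

```r
# Recode a four-point Likert response into two binary pseudo-items,
# assuming categories are coded 1 = "Strongly disagree" ... 4 = "Strongly agree"
# (an illustrative coding, not necessarily the scale's original category order).
to_pseudo_items <- function(x) {
  data.frame(
    direction = ifelse(x >= 3, 1, 0),          # node 1: agree (1) vs. disagree (0)
    extremity = ifelse(x %in% c(1, 4), 1, 0)   # node 2: endpoint (1) vs. milder category (0)
  )
}

to_pseudo_items(c(1, 2, 3, 4))
# The resulting binary pseudo-items can then be fitted with standard binary IRT
# models (e.g., via the irtrees or mirt packages) to separate the substantive
# trait from the extreme response style.
```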