
ESRA 2019 Glance Program


Assessing the Quality of Survey Data 4

Session Organiser Professor Jörg Blasius (University of Bonn)
Time Wednesday 17th July, 11:00 - 12:30
Room D02

This session will provide a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, and all of them may have a strong influence on the “substantive” solutions. These sources include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, as well as faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Keywords: Quality of data, task simplification, response styles, satisficing

So, Interviewers Deviate from Question Script. What Does it Mean for Measurement Error?

Ms Jennifer Kelley (University of Essex) - Presenting Author
Dr Tarek Al Baghal (University of Essex)

In standardized interviewer-administered surveys, the interviewer is tasked with reading every question exactly as worded. However, research has shown that interviewers go off script, engaging in both minor and major deviations. Researchers argue that major deviations most likely change the meaning of the question, thus increasing measurement error. However, very few studies have evaluated whether this assumption is accurate, and those that have assessed interviewer question-reading deviations report mixed findings: in some cases deviations increase measurement error, while other studies show that question-reading deviations have no impact on measurement error or, in some cases, actually decrease it. Moreover, the data from these studies come from either lab settings or CATI surveys, where research has shown that the rates and types of deviations are much lower than in fielded, face-to-face interviews. Hence, there is still much debate on how, or whether, interviewer question-reading deviations affect measurement error, and it remains unknown how such deviations affect measurement error in face-to-face surveys.
To evaluate question-reading deviations and data quality in face-to-face surveys, this study uses interview recordings, paradata and survey data from Wave 3 of the Understanding Society Innovation Panel (IP). Interviews were behavior coded according to whether the interviewer read the question verbatim or committed a minor or major deviation. To assess data quality, several measures are used, including item nonresponse and differences in response distributions between questions read verbatim (or with minor deviations) and questions with major deviations. In addition, this study exploits several IP Wave 3 experiments on question format (e.g., branching and presence of showcards) to evaluate whether the measurement error (i.e., differential response distributions) found for different question formats can be partially attributed to interviewer question-reading deviations.
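For illustration only, the following is a minimal sketch (not the authors' analysis) of the kind of distributional comparison described above: responses to a single item are cross-tabulated by reading behaviour (verbatim or minor deviation vs. major deviation) and tested for independence using scipy. All counts are invented for demonstration.

```python
# Hypothetical sketch: compare the response distribution of one survey item
# across interviewer reading behaviours. The counts below are made up.
import numpy as np
from scipy.stats import chi2_contingency

# rows: reading behaviour, columns: response categories 1..5
table = np.array([
    [120,  95,  60,  30,  15],   # verbatim or minor deviation
    [ 25,  30,  22,  14,   9],   # major deviation
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# A small p-value would suggest the response distribution differs by
# reading behaviour, i.e. a possible deviation-related measurement effect.
```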


Differences Between Real Data Describing the Respondent and Data Falsified by Interviewers

Dr Uta Landrock (LIfBi - Leibniz Institute for Educational Trajectories) - Presenting Author

In face-to-face interviews, the interviewer is an important actor with relevant positive and potentially negative influence on data quality. The most undesirable impact occurs when interviewers decide to falsify. It is therefore important to know how real and faked interviews differ. We show how respondents' personal self-descriptions (i.e. the variables describing the respondents) differ between real and falsified survey data: Who is "Joe Average" in the real and in the faked data? Our database consists of two data sets: in a quasi-experimental setup, 78 interviewers conducted 710 real face-to-face interviews and then falsified 710 corresponding interviews in the lab. To answer the research question, we examine the socioeconomic variables on income and social class, personality traits, respondents' height and weight, and their reported behaviors. The interviewers' assessments of respondents' attractiveness and the data on consent to be re-contacted can also be used to describe and compare the "Joe Averages" of the two subsamples. Preliminary results show strong similarities between real and "faked" respondents. However, there are also significant differences between the two subsamples, e.g. regarding healthy eating behavior, self-efficacy and self-placement on the left-right scale. These first results demonstrate that, on the one hand, falsifiers are able to reproduce realistic proportions and means of variables; on the other hand, differences occur that may be used to identify faked interviews. In line with earlier findings on interviewer falsifications, the implicit everyday knowledge of the falsifiers may lead to "good" falsifications. However, information that touches on common but untrue stereotypes leads the falsifiers to misjudge social reality and produce deviations from true scores.


Filter Patterns from Questionnaire Graphs: A Simple Example

Ms Katharina Stark (Leibniz Institute for Educational Trajectories) - Presenting Author
Ms Sabine Zinn (Leibniz Institute for Educational Trajectories)

In this talk we present first steps towards an automated approach for studying two essential quality issues related to survey data. First, our approach supports the detection of interviewer misbehaviour. Second, it allows us to study the meaningfulness of highly nested filter structures in the questionnaire, which may lead to case numbers too small for feasible statistical analysis. The basic idea is to use a theoretical graph model to describe all possible pathways through a survey questionnaire. The nodes of the resulting questionnaire graph represent questions and its edges the paths that link the questions. In this way, key properties of the questionnaire can be derived in a very straightforward manner simply by studying its graph. Examples of such key properties are all traversable paths through the questionnaire, and thus all possible data patterns resulting from filtering. To use the theoretical graph model for examining the appropriateness of the survey data, the theoretical graph properties are compared with the empirical survey data. Significant differences between the two can indicate problems in the data collection process: a very high frequency of certain paths in the data could indicate interviewer misbehaviour, whereas very low frequencies of certain paths are a sign of path redundancy. To study the capability of our approach and to reveal potential hurdles, we designed a small case study. We constructed a short questionnaire on smoking behaviour comprising ten survey questions and three filter questions, resulting in 31 possible data patterns through filtering, and applied it to a synthetic sample of 200 persons. Assuming an error-free questionnaire, we use simulation to study whether we can detect interviewer misbehaviour and clear interrelations between population composition, sample size and path frequencies. We rely on scenarios derived from experiences with the NEPS and other large surveys.
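As an illustration of the graph idea (not the authors' implementation), the sketch below builds a toy questionnaire graph with a single smoking filter, enumerates all admissible paths, and compares them with made-up observed path frequencies. The question names, edges and counts are assumptions for demonstration only.

```python
# Hypothetical questionnaire graph: nodes are questions, directed edges are
# admissible routings (filters); every simple path from start to end is one
# possible data pattern.
import networkx as nx
from collections import Counter

G = nx.DiGraph()
G.add_edges_from([
    ("Q1_age", "Q2_smoker"),                 # everyone answers Q1 then Q2
    ("Q2_smoker", "Q3_cigs_per_day"),        # filter: smokers only
    ("Q2_smoker", "Q5_health"),              # filter: non-smokers skip Q3/Q4
    ("Q3_cigs_per_day", "Q4_quit_attempts"),
    ("Q4_quit_attempts", "Q5_health"),
    ("Q5_health", "END"),
])

# All theoretically possible routings through the instrument
theoretical_paths = [tuple(p) for p in nx.all_simple_paths(G, "Q1_age", "END")]

# Observed routings reconstructed from (synthetic) response data: a path that
# never occurs hints at redundant filters; an implausibly frequent path may
# point to interviewer shortcuts.
observed = Counter({theoretical_paths[0]: 182, theoretical_paths[1]: 18})
for path in theoretical_paths:
    print(len(path), observed.get(path, 0), " -> ".join(path))
```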


Anomalies in Survey Data

Professor Jörg Blasius (University of Bonn) - Presenting Author
Mr Lukas Sausen (University of Bonn)

While many factors, such as unit and item nonresponse, threaten data quality, we focus on data contamination that arises primarily from task simplification processes. We argue that such processes can occur at two levels. First, respondents themselves may engage in various response strategies that minimize their time and effort in completing the survey. Second, interviewers and other employees of the research institute might take various shortcuts to reduce their time and/or to fulfil the requirements of their contracts; in its simplest form this can be done via copy-and-paste procedures.
This paper examines the cross-national quality of the reports from principals of schools participating in the 2012 PISA. For the 2009 PISA data, Blasius and Thiessen already showed that in several countries a significant number of cases in the principals' survey were fabricated via copy-and-paste. However, Blasius and Thiessen concentrated on strings that are 100 percent identical and did not detect cases in which a very small number of entries had been changed, for example, when employees copied cases and altered a few values, perhaps only one or two. Applying string distance metrics such as the Levenshtein distance, we extend our approach to detect these cases.
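By way of illustration (not the authors' code), the following sketch concatenates each case's answers into a string and flags pairs whose Levenshtein distance falls below a small, assumed threshold, i.e. near-identical cases in which only one or two entries differ. The response strings and the cut-off are hypothetical.

```python
# Hypothetical near-duplicate screen for copy-and-paste fabrication.
from itertools import combinations

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Invented response strings: one character per questionnaire item
cases = {
    "school_A": "132421131",
    "school_B": "132421231",   # differs from school_A in a single entry
    "school_C": "214313122",
}

THRESHOLD = 2  # assumed cut-off; near-identical strings are suspicious
for (id1, s1), (id2, s2) in combinations(cases.items(), 2):
    d = levenshtein(s1, s2)
    if d <= THRESHOLD:
        print(f"flag {id1} vs {id2}: distance {d}")
```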


Surveying Disadvantaged Adolescents: Representation and Measurement Biases

Dr Susanne Vogl (University of Vienna) - Presenting Author

Collecting data in an online panel survey on adolescents’ transitions after secondary school poses many challenges; here I focus on representation and measurement errors. I will present results from a mixed-methods panel study conducted online with 14- to 16-year-olds in Vienna in 2018 and reflect on experiences with the recruitment effort and outcome rates when schools and school authorities are involved and both guardians’ and adolescents’ consent is required. Because of the multiple actors in the sampling process, sample biases are inevitable. I will critically review our experiences and draw conclusions for future research.
Furthermore, with low educational attainment and more than half of the respondents having German as their second language, measurement quality is also at risk. We therefore paid special attention to questionnaire design and pretesting. Additionally, to keep up motivation and attention, I introduced a split-ballot experiment with video clips between thematic blocks, forced choice, and delayed display of the submit button. I examine the effect of these treatment conditions on duration, break-offs, item nonresponse and response patterns.
The aim of the contribution is to discuss the practical requirements and problems of surveying disadvantaged adolescents as one hard-to-reach population, and to showcase the strategies taken in recruitment, questionnaire design and survey techniques, along with their implications and effects. The lessons learnt can inform methodological discussion in general and research on disadvantaged adolescents in particular.