ESRA 2019 Draft Programme at a Glance

Assessing the Quality of Survey Data 4

Session Organiser: Professor Jörg Blasius (University of Bonn)
Time: Wednesday 17th July, 11:00 - 12:30
Room: D02

This session will provide a series of original investigations into data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, as well as faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Keywords: Quality of data, task simplification, response styles, satisficing

The Impact of Response Styles on Parent-Child Similarity in Anti-Immigrant Attitudes

Dr Cecil Meeusen (Erasmus University Rotterdam) - Presenting Author
Dr Caroline Vandenplas (KU Leuven)

Parents are known to have a considerable impact on their children’s feelings of prejudice toward immigrants through the intergenerational transmission of attitudes. Directly or indirectly, parents can steer the way their children think about ethnic diversity and intergroup relations. Intergenerational similarity is usually assessed empirically by means of correlational analysis (relative similarity) and percentage agreement on scale scores (absolute similarity). Current research overlooks one potential confounder of parent-child attitudinal correspondence: acquiescence and extreme response styles. These types of response bias may have substantial consequences for research on intergenerational similarity, as this field does not account for the possibility that parents and children resemble each other as a consequence of a methodological artefact: parents and children may share the same response style behavior, i.e., they may correspond in the way they answer survey questions, irrespective of their content.
The purpose of this paper is threefold. First, we assess which types of adolescents show more or less response style behavior. Second, we quantify the extent to which parents and children correspond in response style behavior. Third, we investigate what intergenerational similarity in response bias implies for the current state of the art regarding parent-child similarity in social and political attitudes. We use data from the Parent-Child Socialization Study, a two-wave random probability sample of mothers, fathers, and their children in Flanders, the Dutch-speaking part of Belgium (N > 3000).

So, Interviewers Deviate from Question Script. What Does it Mean for Measurement Error?

Ms Jennifer Kelley (University of Essex) - Presenting Author
Dr Tarek Al Baghal (University of Essex)

In standardized interviewer-administered surveys, the interviewer is tasked with reading every question exactly as worded. However, research has shown that interviewers go off script, engaging in both minor and major deviations. Researchers argue that major deviations most likely change the meaning of the question, thus increasing measurement error. However, there have been very few studies that evaluate whether this assumption is accurate. The studies that have assessed interviewer question-reading deviations report mixed findings: in some cases deviations increase measurement error, while other studies show that question-reading deviations have no impact on measurement error or, in some cases, actually decrease it. Moreover, the data from these studies come from either lab settings or CATI surveys, where research has shown that the rate and type of deviations are much lower than in fielded, face-to-face interviews. Hence, there is still much debate on how, or whether, interviewer question-reading deviations affect measurement error, and it remains unknown how question-reading deviations affect measurement error in face-to-face surveys.
To evaluate question-reading deviations and data quality in face-to-face surveys, this study uses interview recordings, paradata and survey data from Wave 3 of the Understanding Society Innovation Panel (IP). Interviews were behavior coded according to whether the interviewer read each question verbatim or committed a minor or major deviation. To assess data quality, several measures are used, including item nonresponse and differences in response distributions between questions read verbatim (or with minor deviations) and questions with major deviations. In addition, the study exploits several IP Wave 3 experiments on question format (e.g., branching and presence of showcards) to evaluate whether the measurement error (i.e., differential response distributions) found for different question formats can be partially attributed to interviewer question-reading deviations.

Differences Between Real Data Describing the Respondent and Data Falsified by Interviewers

Dr Uta Landrock (LIfBi - Leibniz Institute for Educational Trajectories) - Presenting Author

In face-to-face interviews, the interviewer is an important actor with a relevant positive and potentially negative influence on data quality. The most undesirable impact occurs when interviewers decide to falsify. It is therefore important to know how real and faked interviews differ. We show how respondents' personal self-descriptions (i.e. the variables describing the respondents) differ between real and falsified survey data: who is "Joe Average" in the real and in the faked data? Our database consists of two data sets: in a quasi-experimental setup, 78 interviewers conducted 710 real face-to-face interviews; the interviewers then falsified 710 corresponding interviews in the lab. To answer the research question, we examine the socioeconomic variables on income and social class as well as personality traits, but also data on respondents' height and weight and their reported behaviors. The interviewers' assessments of the attractiveness of the respondents and the data on consent to be re-contacted can also be used to describe and compare the "Joe Averages" of the two subsamples. Preliminary results show strong similarities between real and "faked" respondents. However, there are also significant differences between the two subsamples, e.g. regarding healthy eating behavior, self-efficacy and self-placement on the left-right scale. These first results demonstrate that, on the one hand, falsifiers are able to reproduce realistic proportions and means of variables; on the other hand, differences occur that may be used to identify faked interviews. In line with earlier findings on interviewer falsifications, the implicit everyday knowledge of falsifiers may lead to "good" falsifications, whereas information touching on common but untrue stereotypes leads falsifiers to misjudge social reality and produce deviations from true scores.

Filter Patterns from Questionnaire Graphs: A Simple Example

Ms Katharina Stark (Leibniz Institute for Educational Trajectories) - Presenting Author
Ms Sabine Zinn (Leibniz Institute for Educational Trajectories)

In this talk we present first steps towards an automated approach for studying two essential quality issues related to survey data. First, our approach supports the detection of interviewer misbehaviour. Second, it allows studying the meaningfulness of highly nested filter structures in the questionnaire, which may lead to case numbers too small for feasible statistical analysis. The basic idea is to use a theoretical graph model to describe all possible pathways through a survey questionnaire. The nodes of the resulting questionnaire graph represent questions, and its edges the paths that link the questions. In this way, we can derive key properties of the questionnaire in a very straightforward manner by simply studying its corresponding graph. Examples of such key properties are all traversable paths through the questionnaire, and thus all possible data patterns resulting from filtering. To use the theoretical graph model for examining the appropriateness of the survey data, the theoretical graph properties are compared with empirical survey data. Significant differences between the two can indicate problems in the data collection process. For example, a very high frequency of certain paths in the data could indicate interviewer misbehavior; conversely, very low frequencies of certain paths are a sign of path redundancy. To study the capability of our approach and to reveal potential hurdles, we designed a small case study. We constructed a short questionnaire on smoking behavior comprising ten survey questions and three filter questions, resulting in 31 possible data patterns through filtering. We applied this questionnaire to a synthetic sample of 200 persons. Assuming an error-free questionnaire, we use simulation to study whether we can detect interviewer misbehavior and clear interrelations between population composition, sample size and path frequencies. We rely on scenarios derived from experience with the NEPS and other large surveys.
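The path-enumeration idea behind the questionnaire graph can be sketched in a few lines. The toy graph below is invented for illustration (it is not the smoking questionnaire from the study): nodes are questions, edges are the routings created by a filter question, and a depth-first traversal yields every theoretically possible data pattern, which could then be compared against observed path frequencies.

```python
def enumerate_paths(graph, start, end, path=None):
    """Depth-first enumeration of all start-to-end paths in an acyclic questionnaire graph."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        paths.extend(enumerate_paths(graph, nxt, end, path))
    return paths

# Hypothetical five-question instrument: Q2 is a filter question;
# smokers continue to Q3, non-smokers skip ahead to Q5.
questionnaire = {
    "Q1": ["Q2"],
    "Q2": ["Q3", "Q5"],   # filter: two outgoing routings
    "Q3": ["Q4"],
    "Q4": ["Q5"],
    "Q5": ["END"],
}

theoretical = enumerate_paths(questionnaire, "Q1", "END")
print(len(theoretical))  # one filter with two branches -> 2 data patterns
```

Empirical response patterns that traverse no theoretical path (or traverse one path far more often than the population composition would suggest) would then be the candidates for closer inspection.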

Anomalies in Survey Data

Professor Jörg Blasius (University of Bonn) - Presenting Author
Mr Lukas Sausen (University of Bonn)

While many factors, such as unit- and item nonresponse, threaten data quality, we focus on data contamination that arises primarily from task simplification processes. We argue that such processes can occur at two levels. First, respondents themselves may engage in various response strategies that minimize their time and effort in completing the survey. Second, interviewers and other employees of the research institute might take various shortcuts to reduce their time and/or to fulfil the requirements of their contracts; in the simplest form this can be done via copy-and-paste procedures.
This paper examines the cross-national quality of the reports from principals of schools participating in the 2012 PISA. For the 2009 PISA data, Blasius and Thiessen have already shown that in several countries a significant number of cases in the principals' survey were fabricated via copy-and-paste. However, they concentrated on strings that are 100 percent identical and did not detect cases in which a very small number of entries had been changed, for example, when employees copied cases and altered a few values, perhaps only one or two. Applying string distance metrics such as the Levenshtein distance, we extend the approach to detect these cases as well.
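The detection logic can be illustrated with a minimal sketch. The response strings and the two-edit threshold below are invented for the example (the actual analysis would use the concatenated PISA principal responses and a calibrated cutoff): an exact copy-and-paste duplicate has Levenshtein distance zero, while a copy with one or two altered values has a small but nonzero distance that exact-match screening would miss.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Each string encodes one case's answers on a 1-5 scale (hypothetical data).
cases = {
    "case_01": "1432512341",
    "case_02": "1432512341",   # exact copy-and-paste duplicate
    "case_03": "1432512541",   # copy with a single value changed
    "case_04": "5213345122",   # unrelated response pattern
}

# Flag every pair of cases within two edits of each other.
ids = sorted(cases)
suspects = [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if levenshtein(cases[a], cases[b]) <= 2]
print(suspects)
```

Exact-string screening would flag only the case_01/case_02 pair; the distance-based screen additionally catches the near-copy case_03, which is precisely the extension described above.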