
ESRA 2019 full programme




Assessing the Quality of Survey Data 2

Session Organiser: Professor Jörg Blasius (University of Bonn)
Time: Tuesday 16th July, 14:00 - 15:30
Room: D02

This session will provide a series of original investigations of data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Keywords: Quality of data, task simplification, response styles, satisficing

Racial Ethnic Differences in the Associations between Verbal and Non-verbal Behavior Codes

Professor Young Cho (University of Wisconsin-Milwaukee, Zilber School of Public Health) - Presenting Author
Professor Timothy Johnson (University of Illinois at Chicago, Survey Research Laboratory)
Professor Allyson Holbrook (University of Illinois at Chicago, Survey Research Laboratory)


Background: Behavior coding has been used to evaluate survey questions and to understand the survey interaction process. It has additionally been used to examine difficulties with specific cognitive processes involved in responding to survey questions. A relatively new innovation is the use of behavior coding specifically to investigate cultural variability in respondent behaviors during interviews. The current paper examines associations between verbal and non-verbal behavior codes, and ethnic differences in the strength of those associations.

Methods: A total of 384 African American, non-Hispanic white, and Korean American respondents were included. Their responses to 47 questions were behavior coded to measure verbally expressed comprehension and mapping difficulties, as well as non-verbally communicated difficulties of five kinds: (1) changes in body position, (2) body action, (3) self-touching action, (4) head action, and (5) facial expression change or movement. A total of 18,095 responses were coded and available for analysis.

Findings: Controlling for language, ethnicity, and other demographic variables, the associations between verbal and non-verbal codes were significant. Verbal behavior codes for comprehension and mapping difficulties were strongly associated with all non-verbal behaviors. Some racial and ethnic groups were shown to be non-verbally less expressive, and the strength of association between the two sets of behavior codes varied somewhat across groups. Koreans, for example, were less likely than whites to use facial expressions, head actions, and self-touching when they had comprehension difficulties. When respondents had mapping difficulties, non-white racial and ethnic groups were in general more likely than whites to employ non-verbal expression.

Conclusion: Response processing difficulties can be detected through non-verbal as well as verbal respondent behaviors across groups of different ethnic backgrounds, with some noticeable differences in the likelihood of exhibiting non-verbal expression when such difficulties occur.


Effects of the Presence of Others on Reporting on Sensitive and Attitude Questions in the Demographic and Health Survey (DHS): A Case Study from Turkey, 2013

Ms Farhia Salat Mohamud (Hacettepe University) - Presenting Author

The effects of the presence of others are well established in the survey literature. However, this issue has not been extensively studied in the context of Turkey, where response rates are high and patriarchal norms prevail.
We used data from the 2013 Turkey Demographic and Health Survey (TDHS-2013), a national, face-to-face household survey. DHS surveys generally require the interviewer to make every effort to ensure respondents’ privacy, but these efforts do not always translate into complete privacy in the field. In the TDHS-2013, mothers were accordingly present in 3.1% of interviews, mothers-in-law in 1.6%, men in 1.8%, and other women in 9.5%.
We focused on four main groups of variables: opinions on patriarchal norms, religious behavior, fertility-related behavior, and alcohol use and voting.
Our findings indicated that the presence of mothers-in-law and other women was often associated with higher proportions of agreement with statements reinforcing gender norms (including wife beating). The presence of men seems to have had the same effect, though to a lesser extent, while the presence of biological mothers was not significant. When no one else was present, a lower proportion of women agreed with patriarchal notions. We did not observe many significant effects of third-party presence on the religion questions. Women usually reported fewer abortions in the presence of adults. They reported wanting more children when a mother-in-law or other women were present, and fewer when their mother was present. We observed lower proportions of regular alcohol use when other women were present. Regular voting was reported more often in the presence of a mother-in-law and less often in the mother’s presence.
We seek to illustrate that the presence of other persons during interviews can disproportionately influence respondents’ answers, and to show the significance of this influence.


Monitoring Lifelong Learning in Europe – (In)Comparability of Indicators and Implications for European and Local Public Policies

Dr Magdalena Jelonek (Cracow University of Economics) - Presenting Author
Dr Konrad Turek (Jagiellonian University)
Dr Barbara Worek (Jagiellonian University)

European and national policies in many areas give high priority to lifelong learning (LLL), and this is accompanied by substantial investments. We must nevertheless ask about the efficiency of these policies and take into account that monitoring and evaluating public policies requires reliable indicators. LLL is a broad concept with vague borders, which causes difficulties in operationalisation and measurement. Policy makers and stakeholders responsible for interventions and financial allocations are often unaware of the methodological nuances that affect the monitored indicators. Evidence from different countries shows that even small changes in a questionnaire may affect LLL measures.
The aim of this presentation is to characterise the difficulties and limitations associated with LLL monitoring in Europe. The paper characterises the different approaches of European countries to monitoring lifelong learning, the centralisation or decentralisation of this process, and the lack of harmonisation and its consequences for the quality of indicators. We illustrate the problem with data from the first [2010-2014] and second [2017-] rounds of the Human Capital Study, a representative, large-scale survey conducted in Poland. We analyse the effect of modifications to the LLL questions on indicator values and response patterns. One of the primary goals of the study was to investigate a wide range of non-formal and informal learning activities. In 2017 the LLL questions were modified, which resulted in a significant increase in the general level of LLL activity. Comparing the results from all waves allows us to assess the size of the effect that can be attributed to the change in the questionnaire. The large cross-sectional samples allow for detailed analysis, including estimation of the effect for different groups of respondents and sectors of the labour market. We compare alternative estimation methods and approaches. In conclusion, we discuss both the methodological aspects of LLL studies, e.g. the quality and comparability of indicators, and the practical implications of the results for public policies.


The Effect of Page and Question Characteristics on Page-Defocusing

Mr Tobias Baier (Darmstadt University of Technology) - Presenting Author
Ms Anke Metzler (Darmstadt University of Technology)
Mr Marek Fuchs (Darmstadt University of Technology)

Devices that respondents use for Web survey participation easily allow multitasking: respondents can temporarily leave the survey page and switch to another window or browser tab. So far, research has examined the effect of multitasking on data quality (Kennedy 2010; Sendelbah et al. 2015). However, only a few studies have investigated the causes of multitasking. Some findings indicate that multitasking is affected by the respondent’s age and the situational context (Zwarun and Hall 2014; Sendelbah 2015). This paper aims to extend the research on the causes of multitasking and to assess how page and question characteristics are associated with it. Understanding why respondents multitask can help prevent its potentially detrimental effects on data quality.
For the analyses reported in this paper, two online surveys among members of the non-probability online panel of respondi (n=1,653; n=1,148) and an online survey among university applicants (n=1,125) were conducted in 2018. To measure multitasking, the JavaScript tool SurveyFocus (Höhne & Schlosser 2017) was implemented in these surveys to provide paradata on page-defocusing (see the illustrative sketch below). Using this paradata, we are able to identify page and question characteristics that lead respondents to defocus from the survey and switch to another window or browser tab.
First results show that 13 to 24 percent of respondents defocus on at least one page. Further analyses will include question type, question sensitivity, and question position.
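
By way of illustration only (this is not the authors’ SurveyFocus implementation), page-defocusing paradata of this kind can in principle be collected client-side with standard browser events. The following minimal TypeScript sketch uses the Page Visibility API; the record shape, function names, and page identifier are hypothetical placeholders.

// Illustrative sketch only: capturing page-defocusing paradata in the browser.
// This is NOT the SurveyFocus tool (Höhne & Schlosser 2017); the record shape
// and identifiers below are hypothetical.

interface DefocusRecord {
  pageId: string;        // survey page identifier (assumed to be available)
  defocusedAt: number;   // timestamp (ms) when the page lost visibility
  refocusedAt?: number;  // timestamp (ms) when the page became visible again
}

const defocusRecords: DefocusRecord[] = [];
let pending: DefocusRecord | null = null;

function trackPageDefocus(pageId: string): void {
  // "visibilitychange" fires when the tab is hidden (the respondent switches
  // to another window or browser tab) and again when it becomes visible.
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      pending = { pageId, defocusedAt: Date.now() };
    } else if (pending) {
      pending.refocusedAt = Date.now();
      defocusRecords.push(pending);
      pending = null;
    }
  });
}

// On page submission the accumulated records could be serialised and stored
// alongside the answers, e.g. in a hidden form field or via a paradata endpoint.
function serialiseParadata(): string {
  return JSON.stringify(defocusRecords);
}

trackPageDefocus("page-07"); // hypothetical page identifier

The resulting timestamps yield, for each survey page, whether and for how long the respondent defocused, which is the kind of page-level paradata the analyses described above rely on.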


Never Confuse Answers with Understanding: Perceived Difficulty and the Impact on the Quality of Data Collected

Ms Sophie van der Valk (Trinity College Dublin)
Dr Eva Aizpurua (Trinity College Dublin) - Presenting Author
Professor Mary Rogan (Trinity College Dublin)

Answering survey questions requires substantial cognitive work from respondents. They must (1) interpret the intended meaning of the question, instructions, and response options, (2) retrieve relevant information from memory, (3) integrate that information into a judgment, and (4) map the judgment onto the response options. To reduce this cognitive effort, respondents sometimes skip one or more of these steps, engaging in satisficing behaviours (Tourangeau, Rips, & Rasinski, 2000). A number of studies have shown that satisficing occurs more often when the response task is difficult and that greater difficulty is associated with lower levels of education. We contribute to this body of research by examining how the perceived difficulty of the question-answer process affects data quality indicators in a special population of prisoners, whose education levels are lower than those of the general public. Specifically, we hypothesize that greater difficulty will be associated with (a) higher item nonresponse rates, (b) more acquiescent responses, and (c) shorter answers to open-ended questions. In addition, we anticipate that greater difficulty will be related to more respondent problems, including unfollowed instructions and inadequate answers. The implications of these findings for survey research with special populations are discussed.