
ESRA 2019 full program




Cognition in Surveys 1

Session Organisers: Dr Naomi Kamoen (Tilburg University)
Dr Bregje Holleman (Utrecht University)
Dr Jennifer Sinibaldi (National Center for Science and Engineering Statistics (NCSES))
Time: Tuesday 16th July, 11:00 - 12:30
Room: D25

In recent years, various models describing the cognitive processes underlying question answering in standardized surveys have been proposed, such as the model by Tourangeau, Rips and Rasinski (2000). This model distinguishes four stages in question answering: (1) comprehension of the question, (2) retrieval of information, (3) deriving a judgement, and (4) formulating a response. In addition, there are dual-process models, such as the satisficing model proposed by Krosnick (1991). This model distinguishes two groups of respondents: those who satisfice, doing just enough to give a plausible answer, versus those who optimize, doing their best to give a good answer.


Cognitive models such as the two described above have many applications. For example, they help in understanding what is measured when administering surveys, and they provide a point of departure for explaining the wide range of method effects survey researchers observe. Cognitive theory in surveys is also used by psychologists, linguists and other scholars to obtain a deeper understanding of, for example, language processing, the nature of attitudes, and memory.
In this first session of Cognition in Surveys, cognition is understood to be 'cold cognition' (i.e., beliefs and reasoning) as well as 'hot cognition' (i.e., emotions), which are also related to personality traits.

Keywords: cognition; question-answering processes; satisficing; emotion

Am I Good with a Computer? Self-Descriptive vs Objective Measures of Computer Skills in the Labour Market Research in Poland

Mr Krzysztof Kasparek (Jagiellonian University) - Presenting Author
Dr Szymon Czarnik (Jagiellonian University)
Dr Marcin Kocór (Jagiellonian University)
Dr Maciej Koniewski (Jagiellonian University)

Skills measurement is one of the key elements of labor market research. Although complex objective tests are recommended as the most reliable and valid instruments, they pose numerous challenges for labor market population surveys. The most widely applied solution to this problem has been self-descriptive skills-assessment questionnaires. This approach, however, is not free from problems associated with cognitive biases, such as social desirability bias, low self-esteem, or the Dunning–Kruger effect.

One possible solution to this challenge was proposed in The Study of Human Capital, one of the largest labor market surveys in Poland (about 92,500 respondents aged 18-69 to date). Alongside the large set of self-descriptive skills measures, we introduced a five-item Short Test of Computer Skills (STCS). The tool met the criteria for validity and reliability.

Our analysis identified groups of respondents with adequate and with dubious self-assessments of their skills. We will discuss the characteristics of these groups and share good practices for using similar survey tools.
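The general idea of contrasting a self-rating with an objective test score can be sketched as follows. This is a minimal illustration with made-up data, column-free lists, and an arbitrary divergence threshold; it is not the actual STCS classification procedure.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize a list of scores to mean 0, sd 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def flag_dubious(self_ratings, test_scores, threshold=1.5):
    """Flag respondents whose standardized self-rating and objective
    test score diverge by more than `threshold` standard units.
    The threshold of 1.5 is an illustrative assumption."""
    zs_self = zscores(self_ratings)
    zs_test = zscores(test_scores)
    return [abs(a - b) > threshold for a, b in zip(zs_self, zs_test)]

# Example: the fourth respondent rates themselves highly (5) but
# scores low on the objective test (1), so only they are flagged.
self_ratings = [3, 4, 2, 5, 1]
test_scores = [3, 4, 2, 1, 1]
print(flag_dubious(self_ratings, test_scores))
# → [False, False, False, True, False]
```

In practice the cut-off and the standardization would be chosen with reference to the scales actually used, and both over- and under-estimators can be inspected by keeping the sign of the difference.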


Emotion Regulation and Survey Response Quality

Professor Lonna Atkeson (University of New Mexico) - Presenting Author
Professor Cherie Maestas (University of North Carolina Charlotte)
Professor Sara Levens (University of North Carolina Charlotte)

How do emotional motivations influence survey response patterns? Researchers have considered how attentiveness influences the quality of responses, but have not examined underlying factors that might contribute to attentiveness. Specifically, we consider how emotion and emotion regulation influence survey response. Surveys are often tasked with obtaining opinions about emotion-provoking individuals and events, as well as with eliciting information about respondents’ emotions about such targets. We draw upon regulation theory to consider how the different emotion regulation strategies individuals use to cope with upsetting events influence survey responses. For example, the emotion regulation strategy of avoidance leads individuals to avoid or ignore upsetting stimuli, which could produce survey inattentiveness. Similarly, the strategy of cognitive reappraisal may encourage careful thought and greater attentiveness to questions and answer scales; as a result, reappraisers may appear to be more engaged and thoughtful citizens. We examine these processes and their influence on survey response in a national three-wave panel survey that collected data after three horrific shootings in the United States in 2017 and 2018: one a partisan-motivated shooting of a member of Congress, and two mass shootings (Las Vegas, NV and Parkland, FL). We collected survey measures of general emotion regulation habits using a well-validated psychological scale and developed novel event-specific emotion regulation scales. We use both to examine their relationship to survey attentiveness, including unit non-response, straight-lining, moderating opinions, and open-ended responses. Because of the panel nature of our design, we can examine this question both cross-sectionally and over time as respondents maintain and alter their moods.


Am I Being Neurotic? Personality as a Predictor of Survey Response Styles

Professor Patrick Sturgis (University of Southampton) - Presenting Author
Professor Michael F. Schober (The New School for Social Research)
Professor Ian Brunton-Smith (University of Surrey)


Survey researchers have long known that some respondents provide poorer quality responses to questionnaires than others, as evidenced by empirical indicators such as rates of item nonresponse, Don’t Knows, mid-points, and a lack of differentiation across adjacent questions. The theory of survey satisficing (Krosnick, 1991) has been advanced to explain individual differences in the propensity to give these kinds of sub-optimal responses: respondents pursue strategies that minimise the cognitive costs of answering survey questions and, in doing so, provide acceptable rather than optimal answers. The propensity to satisfice is, furthermore, a function of three key features of the response context: the difficulty of the task and the ability and motivation of the respondent. In short, satisficing is more likely when a question is difficult to answer and the respondent is of lower cognitive ability and weakly motivated (e.g., Roberts and Allum, 2018). In this paper, we explore a potentially important gap in our understanding of the causes of satisficing response styles: personality. We use data from wave 3 of the UK Household Longitudinal Survey (UKHLS), collected 2011-2013, to assess the extent to which individual differences on the Big Five personality inventory (measured using the 15-item BFI-S scale) predict satisficing response styles. The UKHLS also includes measures of motivation (political engagement) and a battery of cognitive ability measures, allowing comparison of the relative effects of motivation, cognitive ability, and personality. We find large and significant effects of personality on the propensity to exhibit satisficing response styles. These effects mostly accord with theoretical expectation: for example, satisficing response styles are less prevalent amongst respondents scoring high on the Conscientiousness dimension. For a majority of the response-style indicators considered, the personality variables have greater explanatory power than the indicators of motivation and cognitive ability.
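Two of the response-style indicators mentioned above, straight-lining and non-differentiation across a battery of same-scale items, can be operationalised very simply. The sketch below is an illustrative assumption about one common way to code them, not the coding used in the UKHLS analysis.

```python
def straight_lined(responses):
    """True if the respondent gave the identical answer to every
    item in the battery (full straight-lining)."""
    return len(set(responses)) == 1

def nondifferentiation(responses):
    """Share of adjacent item pairs with identical answers:
    0 = fully differentiated, 1 = straight-lined. One of several
    possible non-differentiation measures."""
    pairs = list(zip(responses, responses[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

# One respondent's answers to a six-item Likert battery.
battery = [4, 4, 4, 3, 4, 4]
print(straight_lined(battery))                 # → False
print(nondifferentiation(battery))             # 3 of 5 adjacent pairs match
```

Other indicators from the abstract, such as item-nonresponse and Don’t Know rates, follow the same pattern: a per-respondent count over the battery, normalised by the number of items.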


Some Evidence on the Influence of Context and Cultural Factors to Extend the Question-and-Answer Model for Web Surveys

Professor José Luis Padilla García (Universidad de Granada) - Presenting Author
Miss Dörte Naber (Universidad de Granada, Universität Osnabrück)
Dr Isabel Benítez (Universidad Loyola Andalucía (Sevilla))

Since the 1980s, there has been a huge amount of research on the cognitive processes underlying the answering of survey questions. To date, the so-called “Question-and-Answer” model, originally introduced by Tourangeau (1984, 2018), has been used most prominently in the literature. However, extensions of the original model as well as new models have been proposed, such as the extended model of Schwarz and Oyserman (2001) and the ImpExp model of Shulruf, Hattie and Dixon (2008). In our presentation, we will first briefly review existing models of the question-and-answer process and, second, analyze and integrate current theories and findings of survey research with the aim of proposing a more comprehensive model. In particular, we will use results of an ongoing research project to extend knowledge about the question-and-answer process in the context of web surveys. Within a mixed-method design, we combine qualitative Web Probing (WP) evidence with responses collected from 1,000 participants (500 in Germany and 500 in Spain) to single- and multi-item measures from the 8th European Social Survey Round. In this session, our main focus will be on the social influence effect (the “ecology” of the web survey) that is often missed by the dominant versions of the question-and-answer cognitive models. We will therefore present and discuss indicators of social influence, as well as selected context and cross-cultural factors that could play a crucial role in the question-and-answer process for web surveys. In doing so, we aim to contribute to a more profound understanding of the answering process in web surveys and, furthermore, to understand and minimize the effects of related response biases and cross-cultural factors that can undermine data quality.