ESRA 2019 Programme at a Glance


Cognition in Surveys 2

Session Organisers: Dr Naomi Kamoen (Tilburg University)
Dr Bregje Holleman (Utrecht University)
Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences)
Time: Tuesday 16th July, 14:00 - 15:30
Room: D25

In recent years, various models describing the cognitive processes underlying question answering in standardized surveys have been proposed, such as the model by Tourangeau, Rips and Rasinski (2000). This model distinguishes four stages in question answering: (1) comprehension of the question, (2) retrieval of information, (3) deriving a judgement, and (4) formulating a response. In addition, there are dual-process models, such as the satisficing model proposed by Krosnick (1991). This model distinguishes two groups of respondents: those who satisfice and do just enough to give a plausible answer, and those who optimize and do their best to give a good answer.

Cognitive models such as the two described above have many applications. For example, they help in understanding what is measured when administering surveys, and they provide a point of departure for explaining the wide range of method effects survey researchers observe. Cognitive theory in surveys is also used by psychologists, linguists and other scholars to obtain a deeper understanding of, for example, language processing, the nature of attitudes, and memory.
In this session, many different methodologies are used to assess the effects of various survey design characteristics and to unravel the underlying cognitive mechanisms causing these effects.

Keywords: cognition; question-answering processes; satisficing; emotion

What’s Happening in our Brains? Cognitive Processes and Neuronal Correlates While Answering Questions.

Professor Martin Weichbold (University of Salzburg (Sociology)) - Presenting Author
Professor Dietmar Roehm (University of Salzburg (Neurolinguistics))
Professor Reinhard Bachleitner (University of Salzburg (Sociology))
Professor Wolfgang Aschauer (University of Salzburg (Sociology))

Cognitive models have become an integral part of survey theory and practice. Although they offer a useful understanding of cognitive processes during an interview, we have to ask whether these models are still valid in light of recent findings in neuroscience, where today's advanced imaging methods allow for more objective insights into real-time cognitive processing.
In an exploratory study, 48 students answered an online survey consisting of 120 items. Half of the items were factual questions, the other half attitudinal ones, referring to topics of everyday life (ecology, consumption, study, travel, politics, and art). A pretest was used to distinguish easy from cognitively challenging questions, based on subjective evaluations as well as the response latencies of the answers.
While behavioral studies can track only the outcome of knowledge access/retrieval, we used online measures to track the processing and proceduralization of this knowledge. To capture the fine-grained temporal dynamics of question processing, we used eye-tracking and EEG. To gain a better understanding of the neural substrates (i.e., metabolic activity) of question processing, we recorded functional near-infrared spectroscopy (fNIRS, a non-invasive optical imaging technique for measuring cortical hemodynamic activity) data concurrently with the EEG.
First results show differences between the question types, for instance an increase in oxygenated and a decrease in deoxygenated haemoglobin in several brain areas for difficult attitudinal questions compared to easy attitudinal and factual questions. Both effects indicate specific neuronal activity. This is in line with the EEG results, where spectral power analysis in the frequency domain showed a graded theta-band event-related synchronization effect (strongest for difficult attitudinal questions). In our presentation, we show selected results of our study and discuss the consequences of our findings.


The Influence of Context Effect and Visual Contextualization on the Retrieval of Temporarily vs. Chronically Accessible Information

Dr Katharina Meitinger (Utrecht University) - Presenting Author
Dr Tanja Kunz (GESIS Leibniz Institute for the Social Sciences)

Context effects induced by the content of previous questions can affect the cognitive response process. At the same time, respondents use the visual design of a question to “understand” its pragmatic meaning (visual contextualization). Previous studies (Schwarz 1999) related context effects to different types of information retrieved from memory: chronically accessible (context-independent) information and temporarily accessible information, which is accessible due to contextual influences (Schwarz & Bless 1992). If respondents retrieve temporarily accessible information, they potentially rely on information provided by the context.
In open-ended questions, respondents often use the visual design as an additional source of information. Previous research on list-style open-ended questions showed that increasing the number of text boxes can increase the number of themes mentioned (Keusch 2014). However, this might come at a price: Respondents might shift from chronically to temporarily accessible information when the number of boxes exceeds the number of topics the respondent had in mind. One manifestation of this process might be a higher prevalence of the content of previous questions in the responses to the current open-ended question with many text boxes.
This presentation reports results from an experiment implemented in a Web survey with 4,200 German respondents conducted in November 2018. The goal of the experiment was to disentangle the effect of visual contextualization from context effects in open-ended questions. Half of the respondents received a specific prime related to the open-ended question; the other half did not. In each split, we manipulated the number of text boxes (1/3/5/10 boxes). We hypothesize that a larger number of text boxes increases the number of themes mentioned (visual contextualization), but that respondents receiving many text boxes will mention more temporarily accessible themes provided by the context (an interaction of visual contextualization and frames). Preliminary results show support for both hypotheses.


Re-Examining the Left and Top Means First Heuristic Using Eye-Tracking Methodology

Dr Jan Karem Höhne (University of Mannheim) - Presenting Author
Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences)
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Ting Yan (Westat)

Web surveys are commonly based on self-administered modes that use written language to convey information. This kind of language is usually accompanied by visual cues. Research has shown that the visual placement of response options can affect how respondents answer questions, because they sometimes make use of interpretive heuristics. One such heuristic is the "left and top means first" heuristic: the leftmost or top response option is seen as the first one in a conceptual sense. We replicate the experiment on the "order of response options" by Tourangeau, Couper, and Conrad (2004) and extend it by using eye-tracking methodology. Specifically, we investigate respondents' response behavior when the options do not follow a logical order – e.g., it depends, agree strongly, disagree strongly, agree, disagree. By recording respondents' eye movements, we are able to observe how they process the questions and response options and to draw conclusions about their information processing. We conducted a web survey experiment (N = 132) in a lab setting with three groups: 1) response options presented in a consistent order, 2) response options presented in a mildly inconsistent order, and 3) response options presented in a strongly inconsistent order. The statistical analyses reveal a higher fixation count and a longer fixation time on the response options in the conditions with an inconsistent order. In these conditions, we also found more gaze switches between the response options. In addition, response options that are not presented in a logical order affect the responses obtained. To conclude, these findings indicate that order discrepancies confuse respondents and increase the overall response effort. They also affect response distributions, reducing data quality. Thus, we recommend presenting response options consistent with the left and top means first heuristic.


Subgroup Differences in the Cognitive Processing of Rating Scales: Two Experiments on Scale Polarity

Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Jan Karem Höhne (University of Mannheim)
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Ting Yan (Westat)

The polarity of rating scales (i.e., whether the underlying dimensions are unipolar or bipolar) can be expressed verbally (unipolar: not at all satisfied – very satisfied; bipolar: very dissatisfied – very satisfied) and/or in the form of numerical values, with unipolar scales using only positive numbers (e.g., from 1 to 7) and bipolar scales using both negative and positive numbers (e.g., from -3 to +3). Given that verbal labels are sometimes ambiguous as to polarity, it might be advisable to include numeric values to signal the intended scale polarity to respondents. However, many studies have shown that when rating scales include negative values, respondents tend to avoid these and produce more positive answers than when the scales include only positive values. Moreover, research has shown that verbal and numerical labels are processed independently, and thus have separate effects that do not interact. This latter finding suggests either that verbal and numerical labels are not checked for consistency by respondents, or that they do not appear inconsistent to them. Another possible explanation is that only some respondents pay attention to the consistency of the verbal and numerical labels while others do not. We conducted two studies to shed light on this issue: an online survey with 900 respondents and an eye-tracking study with 120 participants. In each study, participants were randomly assigned to one of three versions of a set of 7 favor/oppose (i.e., bipolar) items varying the numbering of the response scales: (1) no numbers (control condition), (2) bipolar numbers (consistent condition), and (3) unipolar numbers (inconsistent condition). In our analyses, we examine how different groups of respondents (e.g., with low/high need for cognition, low/high in conscientiousness) answer the different question versions and whether they check the verbal and numerical labels for consistency.