ESRA 2019 Draft Programme at a Glance
Cognition in surveys 2
Session Organisers: Dr Naomi Kamoen (Tilburg University)
Dr Bregje Holleman (Utrecht University)
Time: Tuesday 16th July, 14:00 - 15:30
In recent years, various models describing the cognitive processes underlying question answering in standardized surveys have been proposed, such as the model by Tourangeau, Rips and Rasinski (2000). This model distinguishes four stages in question answering: (1) comprehension of the question, (2) retrieval of information, (3) deriving a judgement, and (4) formulating a response. In addition, there are dual-process models, such as the satisficing model proposed by Krosnick (1991). This model distinguishes two groups of respondents: those who satisfice, doing just enough to give a plausible answer, and those who optimize, doing their best to give a good answer.
Cognitive models such as the two described above have many applications. For example, they help in understanding what is measured when administering surveys, and they provide a point of departure in explaining the wide range of method effects survey researchers observe. In addition, cognitive theory in surveys is used by psychologists, linguists and other scholars to obtain a deeper understanding of, for example, language processing, the nature of attitudes, and memory.
In this session, many different methodologies are used to assess the effects of various survey design characteristics and to unravel the underlying cognitive mechanisms causing these effects.
Keywords: cognition; question-answering processes; satisficing; emotion
Fictitious Issues in Surveys: Using Pseudo-Opinions to Study the Causes of Misreporting and Invalid Data
Mr Justus Junkermann (Johannes Gutenberg University Mainz) - Presenting Author
Dr Felix Wolter (Johannes Gutenberg University Mainz)
Professor Jochen Mayerl (Technical University Chemnitz)
Mr Henrik Andersen (Technical University Chemnitz)
The literature has repeatedly shown that respondents often give substantive answers to entirely fabricated issues even though they cannot have opinions on them (e.g. Bishop et al. 1980, Bishop, Tuchfarber & Oldendick 1986, Schuman & Presser 1981, Sturgis & Smith 2010). This casts doubt on the validity of responses on topics that truly exist but are perhaps not especially salient (Bishop et al. 1980). The paper investigates the prevalence and determinants of “pseudo-opinions” or “nonattitudes” to fictitious issues (FI). Empirically investigating answers to FI also allows us to study the question-answer process with respect to the causes and mechanisms that create misreporting and invalid survey data.
In our paper, we concentrate on cognitive mechanisms of misreporting on FI, namely, the framing of the survey questions and the mode of information processing (automatic-spontaneous versus reflecting-calculating). The theoretical basis is provided by dual-process theories and the cognitive model of answering survey questions (Chaiken & Trope 1999, Esser 2010, Tourangeau, Rips & Rasinski 2000). The data stem from a nationwide CATI survey (N=1,250) in Germany in which 14 opinion questions were asked about organizations, of which six were non-existent. The survey featured a 2x2 experimental design in which (1) a speed vs. accuracy instruction before the questions was crossed with (2) whether or not an explicit “don’t know” answer category was offered. An active measurement of response latencies for each question was used to operationalize the mode of information processing. Further, we investigate the extent to which social desirability relates to expressions of pseudo-opinions. We report results on the various relationships between question stimuli, response latencies, and answering behavior while incorporating other possible determinants such as socio-demographic information.
The influence of context effects and visual contextualization on the retrieval of temporarily vs. chronically accessible information
Dr Katharina Meitinger (Utrecht University) - Presenting Author
Dr Tanja Kunz (GESIS Leibniz Institute for the Social Sciences)
Context effects induced by the content of previous questions can affect the cognitive response process. At the same time, respondents use the visual design of a question to “understand” the pragmatic question meaning (visual contextualization). Previous studies (Schwarz 1999) related context effects to different types of information retrieved from memory: chronically accessible (context independent) and temporarily accessible (accessible only due to contextual influences) information (Schwarz & Bless 1992). If respondents retrieve temporarily accessible information, they potentially rely on information provided by the context.
In open-ended questions, respondents often use the visual design as an additional source of information. Previous research on list-style open-ended questions showed that increasing the number of text boxes can increase the number of themes mentioned (Keusch 2014). However, this might come at a price: Respondents might shift from chronically to temporarily accessible information when the number of boxes exceeds the number of topics the respondent had in mind. One manifestation of this process might be a higher prevalence of the content of previous questions in the responses to the current open-ended question with many text boxes.
This presentation reports results from an experiment implemented in a Web survey with 4,200 German respondents conducted in November 2018. The goal of the experiment was to disentangle the effect of visual contextualization from context effects in open-ended questions. Half of the respondents received a specific prime that was related to the open-ended question; the other half did not. Within each split, we manipulated the number of text boxes (1/3/5/10 boxes). We hypothesize that a larger number of text boxes increases the number of themes mentioned (visual contextualization), but that respondents receiving many text boxes will mention more temporarily accessible themes provided by the context (interaction of visual contextualization and frames). Preliminary results show support for both hypotheses.
Re-examining the left and top means first heuristic using eye-tracking methodology
Dr Jan Karem Höhne (University of Mannheim) - Presenting Author
Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences)
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Ting Yan (Westat)
Web surveys are commonly based on self-administered modes using written language to convey information. This kind of language is usually accompanied by visual cues. Research has shown that the visual placement of response options can affect how respondents answer questions because they sometimes make use of interpretive heuristics. One such heuristic is the "left and top means first" heuristic: the leftmost or top response option is seen as the first one in a conceptual sense. We replicate the experiment on the "order of response options" by Tourangeau, Couper, and Conrad (2004) and extend it by using eye-tracking methodology. Specifically, we investigate respondents' response behavior when the options do not follow a logical order (e.g., it depends, agree strongly, disagree strongly, agree, disagree). By recording respondents' eye movements, we are able to observe how they process the questions and response options and to draw conclusions about their information processing. We conducted a web survey experiment (N = 132) in a lab setting with three groups: 1) response options presented in a consistent order, 2) response options presented in a mildly inconsistent order, and 3) response options presented in a strongly inconsistent order. The statistical analyses reveal a higher fixation count and a longer fixation time on the response options in the conditions with an inconsistent order. In these conditions, we also found more gaze-switches between the response options. In addition, response options that are not presented in a logical order affect the responses obtained. To conclude: these findings indicate that order discrepancies confuse respondents and increase the overall response effort. They also affect response distributions, reducing data quality. Thus, we recommend presenting response options consistent with the left and top means first heuristic.
Subgroup differences in the cognitive processing of rating scales: Two experiments on scale polarity
Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Jan Karem Höhne (University of Mannheim)
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Ting Yan (Westat)
The polarity of rating scales (i.e., whether the underlying dimensions are unipolar or bipolar) can be expressed verbally (unipolar: not at all satisfied – very satisfied, bipolar: very dissatisfied – very satisfied) and/or in the form of numerical values with unipolar scales using only positive numbers (e.g., from 1 to 7) and bipolar scales using both negative and positive numbers (e.g., from -3 to +3). Given that verbal labels are sometimes ambiguous as to polarity, it might be advisable to include numeric values to signal the intended scale polarity to respondents. However, many studies have shown that when rating scales include negative values, respondents tend to avoid these and produce more positive answers than when they include only positive values. Moreover, research has shown that verbal and numerical labels are processed independently, and thus have separate effects that do not interact. This latter finding suggests that either verbal and numerical labels are not being checked for consistency by respondents, or that they do not appear inconsistent to them. Another possible explanation is that only some respondents pay attention to the consistency of the verbal and numerical labels while others do not. We conducted two studies to shed light on this issue: an online survey with 900 respondents and an eye-tracking study with 120 participants. In each study, participants were randomly assigned to three versions of a set of 7 favor/oppose (i.e., bipolar) items varying the numbering of the response scales: (1) no numbers (control condition), (2) bipolar numbers (consistent condition), and (3) unipolar numbers (inconsistent condition). In our analyses, we examine how different groups of respondents (e.g., with low/high need for cognition, low/high in conscientiousness) answer the different question versions and whether they check the verbal and numerical labels for consistency.
Am I good with a computer? Self-descriptive vs objective measures of computer skills in labor market research in Poland
Mr Krzysztof Kasparek (Jagiellonian University) - Presenting Author
Dr Szymon Czarnik (Jagiellonian University)
Dr Marcin Kocór (Jagiellonian University)
Dr Maciej Koniewski (Jagiellonian University)
Skills measurement is one of the key elements of labor market research. Although complex objective tests are recommended as the most reliable and valid instruments, they pose a vast number of challenges for labor market population surveys. The most widely applied solution to this issue has become self-descriptive questionnaires for skills assessment. This approach is not free from problems associated with cognitive biases, such as social desirability bias, low self-esteem, or the Dunning–Kruger effect.
One possible solution to this challenge was proposed in the Study of Human Capital, one of the largest labor market surveys in Poland (about 92,500 respondents aged 18-69 to date). Besides the large set of self-descriptive skills measures, we introduced a five-item Short Test of Computer Skills (STCS). The tool met criteria for validity and reliability.
Our analysis identified groups of respondents with adequate and with dubious self-assessed skills. We will discuss the characteristics of these groups and share good practices for using similar survey tools.