Tuesday 18th July, 09:00 - 10:30 Room: Q4 ANF2


Cognition in surveys

Chair: Dr Bregje Holleman (Utrecht University)
Coordinator 1: Dr Naomi Kamoen (Tilburg University)

Session Details

Cognitive research in surveys covers a wide range of approaches. In recent years, various models describing the cognitive processes underlying question answering in standardized surveys have been proposed. Much research is guided by the model of question answering by Tourangeau, Rips and Rasinski (2000). This model distinguishes four stages in question answering: (1) comprehension of the question, (2) retrieval of information, (3) deriving a judgement, and (4) formulating a response. In addition, there are dual-process models, such as the satisficing model proposed by Krosnick (1991). This model distinguishes two groups of respondents: those who satisfice, trying to do just enough to give a plausible answer, and those who optimize, doing their best to give a good answer.
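
To make the two models concrete, the minimal Python sketch below (purely illustrative and not part of the session description; every function name is hypothetical) walks a question through the four Tourangeau-Rips-Rasinski stages and adds a satisficing shortcut in the spirit of Krosnick's model.

    # Illustrative sketch only: the four stages as a pipeline, plus a
    # satisficing shortcut. All names are hypothetical.

    def comprehend(question: str) -> str:
        """Stage 1: parse the question into an interpreted meaning."""
        return question.strip().lower()

    def retrieve(meaning: str, memory: dict) -> list:
        """Stage 2: retrieve beliefs relevant to the interpretation."""
        return [fact for topic, fact in memory.items() if topic in meaning]

    def judge(evidence: list) -> float:
        """Stage 3: integrate the retrieved evidence into a judgement."""
        return sum(evidence) / len(evidence) if evidence else 0.0

    def respond(judgement: float, scale: list) -> int:
        """Stage 4: map the judgement onto the offered response scale."""
        return min(scale, key=lambda point: abs(point - judgement))

    def answer(question, memory, scale, satisfice=False):
        meaning = comprehend(question)
        if satisfice:
            # Satisficers shortcut retrieval and judgement and pick a
            # merely plausible answer, e.g. the scale midpoint.
            return scale[len(scale) // 2]
        return respond(judge(retrieve(meaning, memory)), scale)

    memory = {"taxes": -1.0, "housing": 0.5}
    print(answer("Taxes on housing should be raised", memory, [-2, -1, 0, 1, 2]))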

Cognitive models such as the two described above have many applications. For example, they help in understanding what is measured when administering surveys, and they provide a point of departure in explaining the wide range of method effects survey researchers observe. Cognitive theory in surveys is also used by psychologists, linguists and other scholars to obtain a deeper understanding of, for example, language processing, the nature of attitudes, and memory.

In this session, we welcome studies that address the cognitive processes underlying question answering. This can be done, for example, by using qualitative research methods, such as cognitive interviewing, or by applying unobtrusive research methods, such as reaction times or eye-tracking. We would like to stress that we welcome work on cognitive processes in a broad range of survey contexts: the cognitions related to factual or behavioral questions as well as the cognitive processes underlying the answers to attitude questions. These can be addressed in a large variety of survey types and administration modes, including the online political attitude surveys known as Voting Advice Applications.

Paper Details

1. Explaining systematic measurement errors using cognitive-process models of response behavior and the attitude towards surveys
Mr Christoph Giehl (Technical University Kaiserslautern)
Professor Jochen Mayerl (Technical University Kaiserslautern)

Cognitive dual-process models of response behavior distinguish between two groups of respondents: those giving answers based on simple decision heuristics and automatic-spontaneous cognitive processes, which is typically associated with short response latencies, and those giving answers based on deliberative thoughts, which is associated with long response latencies. Empirical studies show that both groups are susceptible to different types of response effects such as acquiescence effects for quick responders or contrast effects of question order for slow responders.
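
As a hedged illustration of this latency-based distinction, the sketch below shows one common way to split answers into fast (automatic-spontaneous) and slow (deliberative) responses: a median split on response latencies after a within-person baseline correction. The data layout and the split rule are our assumptions, not the authors' exact procedure.

    import pandas as pd

    df = pd.DataFrame({
        "respondent": [1, 1, 1, 2, 2, 2],
        "item":       ["a", "b", "c", "a", "b", "c"],
        "latency_ms": [900, 1500, 1100, 2400, 3100, 2800],
    })

    # Within-person baseline: each respondent's mean latency captures
    # general reading and response speed.
    baseline = df.groupby("respondent")["latency_ms"].transform("mean")
    df["rel_latency"] = df["latency_ms"] / baseline

    # Median split per item: below the item median counts as "fast".
    item_median = df.groupby("item")["rel_latency"].transform("median")
    df["responder_type"] = (df["rel_latency"] >= item_median).map(
        {True: "slow", False: "fast"})
    print(df)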

For fast respondents, chronic attitude accessibility is assumed to moderate the attitude-response process: if accessibility is high, respondents will answer based on their attitudes; if it is low, they will answer based on simple decision heuristics or situational cues (Fazio 1990). We assume that those respondents who give automatic-spontaneous answers without chronic attitude accessibility are the ones most likely to be affected by response effects which demand lower levels of elaboration (like acquiescence effects).

Furthermore, respondents can be distinguished based on their general attitude towards surveys, which leads to a specific role in surveys. This role can either be cooperative, meaning respondents try to answer every question as truthfully as possible, or conforming, meaning respondents apply cost-benefit considerations when answering questions, which often leads to biased answers (Stocké 2004). Since such considerations presuppose a higher level of elaboration, we suppose that the general attitude towards surveys is a moderator only for slow responses. Therefore, response effects which demand higher levels of elaboration (like the contrast effect of question order) should be observable especially for slow responders with a negative general attitude towards surveys.

Thus, based on an extended dual-process model of response behavior, we propose a general association between specific types of response effects, response latencies and respondents' attitudes in surveys. A respondent's degree of answer elaboration, the general attitude towards surveys and the degree of chronic accessibility of the research case are all predictors of specific types of response effects.

To examine this assumption, we investigate the link between the general attitude towards surveys, attitude accessibility, the level of a respondent's answer elaboration and the occurrence of response effects (in particular the acquiescence effect and the assimilation and contrast effects of question order) in order to explain method effects according to dual-process models. For this examination, we use data from (1) a paper-and-pencil evaluation project conducted from 2014 to 2016; (2) a web survey among students in 2016; and (3) a German longitudinal mixed-mode study, the GESIS Panel.
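
For illustration, a simple acquiescence indicator can be computed from reversed item pairs, as in the minimal sketch below (variable names and the agreement cutoff are assumptions, not the authors' operationalization): agreeing with both a statement and its reversal signals acquiescence rather than a substantive attitude.

    import pandas as pd

    # 5-point agree/disagree answers; items ending in "_r" are reversed twins.
    answers = pd.DataFrame({
        "q1":   [5, 2, 4],
        "q1_r": [4, 4, 2],   # reversed wording of q1
        "q2":   [5, 1, 3],
        "q2_r": [5, 5, 3],
    })

    agree = answers >= 4  # treat 4 and 5 as agreement
    pairs = [("q1", "q1_r"), ("q2", "q2_r")]
    # Acquiescence: agreeing with both members of a reversed pair.
    acq = sum((agree[a] & agree[b]) for a, b in pairs)
    print(acq)  # per-respondent count of acquiescent pairs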

Sources:
Fazio, R. H. (1990). Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework. Advances in Experimental Social Psychology, 23, 75–109.
Stocké, V. (2004). Entstehungsbedingungen von Antwortverzerrungen durch soziale Erwünschtheit. Ein Vergleich der Prognosen der Rational-Choice Theorie und des Modells der Frame-Selektion. Zeitschrift für Soziologie, 33, 303–320.


2. Attitude strength as an explanation for wording effects in political opinion questions
Dr Bregje Holleman (Assistant Professor, Utrecht University)
Dr Naomi Kamoen (Assistant Professor, Tilburg University)

Survey methodological research shows time and again that contrastive wordings in attitude questions affect the answers obtained. Rugg (1940) was the first to establish that a question about freedom of speech phrased with the verb 'allow' elicited more 'no'-answers than the opposite question with 'forbid' elicited 'yes'-answers. Hence, respondents' evaluations of free speech seemed more positive when a negative question had been asked.
Explanations have focused on a difference in connotations between positive and negative wordings (Schuman & Presser 1981; Holleman 2000). Another type of explanation for these wording effects can be derived from dual-route theories of information processing. Such theories (e.g., the ELM by Petty & Cacioppo or the satisficing model by Krosnick) propose that people with strong attitudes tend to process information about an issue more deeply, whereas people with weak attitudes tend to perform shallow or heuristic processing. The latter group will therefore be more susceptible to superficial characteristics of the way the information is conveyed (e.g., wording or source credibility).
While theoretically plausible, the empirical evidence in extant survey research is very heterogeneous: often the wording effect for contrastive questions can be explained by (indicators of) attitude strength, but equally often attitude strength is found to be unrelated to the asymmetry. These heterogeneous findings might be due to differences in the operationalization of attitude strength.
In the current study, we tested the occurrence of wording effects for contrastive attitude questions once more for respondents holding strong and weak attitudes, in the context of political attitude questions in a Voting Advice Application (VAA). We manipulated the wording of 14 questions in one survey, which showed an overall wording effect in the direction established by Rugg (1940). The wording effects were small compared to previous studies, which might be explained by the fact that a VAA is an opt-in survey with relatively highly motivated users.
We proceeded by investigating the role of attitude strength as a cause of the asymmetries found. Operationalizing attitude strength as political interest showed no relation to the asymmetries. Following research in political decision making, we used an alternative operationalization in terms of respondents' degree of political sophistication. In our study, variation in users' level of political sophistication was systematically related to the size and occurrence of wording effects: the higher the political sophistication, the smaller the overall wording effect, and the group of VAA users with the highest levels of political sophistication was not susceptible to the effects of question wording at all. This seems to support an attitude-strength explanation after all, and it argues for more context-specific measures of motivation and strength than used previously.
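
A minimal sketch of the kind of moderation test described here, assuming a logistic regression with a wording-by-sophistication interaction (the data file, column names, and model form are our assumptions, not the authors' exact specification):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("vaa_answers.csv")  # hypothetical file
    # assumed columns: answer_pos (0/1), wording ("allow"/"forbid"),
    #                  sophistication (e.g., a 0-10 scale)

    model = smf.logit(
        "answer_pos ~ C(wording) * sophistication", data=df).fit()
    print(model.summary())
    # A shrinking wording coefficient at higher sophistication (a
    # significant interaction) would mirror the pattern the abstract reports.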


3. Comparing the Performance of Agree/Disagree and Item-Specific Questions over PCs and Smartphones
Mr Jan Karem Höhne (University of Göttingen)
Dr Melanie Revilla (RECSM-Universitat Pompeu Fabra)
Dr Timo Lenzner (GESIS – Leibniz Institute for the Social Sciences)

In quantitative social research, the use of agree/disagree (A/D) questions (i.e., questions whose response categories are based on an agreement continuum) is a common and very popular technique for measuring respondents' attitudes and opinions. For instance, this question format is frequently used in the Eurobarometer, the ANES, and the ISSP. Theoretical considerations, however, suggest that A/D questions require effortful and intricate cognitive processing. For this reason, a variety of survey scientists recommend the use of item-specific (IS) questions (i.e., questions whose response categories address the underlying dimension of the attitude or opinion directly), since these appear to be less burdensome.

In the current study, we investigate the cognitive effort (by means of response times and answer changes) and response quality (by means of survey satisficing indicators) associated with A/D and IS questions on PCs and smartphones. We collected data in the Netquest access panel from September to October 2016 and applied a split-ballot design with four experimental groups defined by device type (PC vs. smartphone) and question format (A/D vs. IS), resulting in a 2-by-2 design. The first and second groups contained n = 300 respondents each, answering A/D and IS questions on PCs, respectively; the third and fourth groups contained n = 400 respondents each, answering A/D and IS questions on smartphones, respectively.

Although the data analysis is still pending, we expect, contrary to current theoretical considerations, to observe longer response times and more answer changes for the IS than for the A/D question format, irrespective of device type. In addition, we expect to observe higher response quality for IS than for A/D questions.
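
As a rough sketch of how the two cognitive-effort indicators named in the abstract could be compared across the 2-by-2 design (the data file and column names are illustrative assumptions):

    import pandas as pd

    df = pd.read_csv("split_ballot.csv")  # hypothetical file
    # assumed columns: device ("PC"/"smartphone"), fmt ("A/D"/"IS"),
    #                  response_time_s, answer_changes

    effort = (df.groupby(["device", "fmt"])
                [["response_time_s", "answer_changes"]]
                .mean())
    print(effort)
    # Longer mean times or more changes in a cell would suggest higher
    # cognitive effort for that device-format combination.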


4. I don't get it. Response difficulties in answering political attitude statements in Voting Advice Applications.
Dr Naomi Kamoen (Tilburg University)
Dr Bregje Holleman (Utrecht University)

What question characteristics are related to comprehension problems in political attitude questions? And what type of answering behaviour do people exhibit when they do not understand a question? We investigated these issues in the context of Voting Advice Applications (VAAs). These online tools provide users with a voting advice based on their answers to a set of about 30 political attitude questions. VAAs have become a central source of political information (see Garzia & Marschall, 2012), and research shows that the VAA voting advice affects the vote cast (e.g., Andreadis & Wall, 2014). It is therefore of utmost importance to investigate to what extent VAA users understand the questions that lead to the voting advice, and how they respond in case of comprehension difficulties.

Study 1 consists of cognitive interviews with 60 users, each filling out 30 VAA statements prior to the 2014 municipal elections in the Dutch municipality of Utrecht. The verbalizations of these respondents were recorded and categorized into several types of comprehension problems by two independent coders (Kappa/Kappa-max between 0.58 and 0.98). Results show that VAA users encounter a comprehension problem for, on average, about 1 in 5 questions. About two-thirds of these are related to the semantic meaning of the question, covering difficulties with political jargon (e.g., 'dog tax') or geographical terms (e.g., a specific street in Utrecht). One-third of the comprehension problems are related to the pragmatic comprehension of the question: the respondent understands the literal meaning of all concepts in the question, but lacks the contextual knowledge to provide a well-considered answer. Such pragmatic comprehension problems are often triggered by a vague quantifying term in the question (e.g., 'taxes on housing should be raised'), which makes users realize they lack knowledge about the current state of affairs ('How high is that tax now?'). In case of comprehension problems, VAA users often assume a certain question meaning, and hardly ever proceed to look for information on the web. Nevertheless, a large majority of respondents provide a substantive answer (often the middle option).
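
The intercoder-reliability check mentioned above can be illustrated with Cohen's kappa for two coders' problem categories; the labels below are made up, and the Kappa/Kappa-max normalization reported in the abstract is omitted for brevity.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes for five verbalizations by two independent coders.
    coder1 = ["semantic", "pragmatic", "none", "semantic", "none"]
    coder2 = ["semantic", "pragmatic", "none", "pragmatic", "none"]

    # Chance-corrected agreement; 1.0 means perfect agreement.
    print(cohen_kappa_score(coder1, coder2))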

In Study 2, we investigated whether the question characteristics leading to comprehension difficulties in Study 1 lead to more neutral and no-opinion answers when statistically analyzed across a larger set of questions in a larger set of VAAs. We performed statistical analyses of all answers provided by 357,858 VAA respondents who used one of 34 different municipal VAAs during the Dutch municipal elections in 2014. Results of Study 2 confirm that political jargon, geographical locations, and vague quantifying terms are related to more neutral and/or no-opinion answers. Interestingly, there seems to be a relation between the type of comprehension problem and the type of answer provided: semantic meaning problems often result in no-opinion answers, whereas pragmatic problems are related to neutral responses.
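
A hedged sketch of a Study 2 style analysis, regressing the probability of a no-opinion answer on question characteristics with standard errors clustered by respondent (the data file, variable names, and model form are assumptions, not the authors' actual model):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("vaa_municipal_2014.csv")  # hypothetical file
    # assumed columns: no_opinion (0/1), neutral (0/1), jargon (0/1),
    #                  geographic (0/1), vague_quantifier (0/1), respondent_id

    fit = smf.logit(
        "no_opinion ~ jargon + geographic + vague_quantifier", data=df
    ).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
    print(fit.summary())
    # Re-running with `neutral` as the outcome would separate the two
    # response patterns the abstract links to semantic vs. pragmatic problems.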