Tuesday 18th July, 11:00 - 12:30 Room: Q2 AUD3


Satisficing in Surveys: Theoretical and Methodological Developments 1

Chair: Dr Joss Rossmann (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1: Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 2: Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)

Session Details

Satisficing theory (Krosnick 1991, 1999) provides a framework for analyzing respondents' response behaviors in surveys and, accordingly, the quality of their responses. The theory distinguishes three response strategies: First, optimizing refers to the complete and effortful execution of all four cognitive steps of the response process. That is, respondents have to interpret the question, retrieve relevant information from memory, form a judgment based on the available information, and translate that judgment into a meaningful answer. Second, if the task of answering a question is difficult and respondents lack the ability or motivation to provide an accurate answer, they may perform the steps of information retrieval and judgment less thoroughly to reduce their response effort. This weak satisficing results in merely satisfactory answers (e.g., selecting the first response option that seems acceptable). Third, under certain conditions respondents may simplify the response task even further by interpreting questions only superficially and skipping the steps of information retrieval and judgment entirely. Strong satisficing is indicated, among other things, by random, nonsubstantive, or non-differentiated responses.

Since its introduction to survey methodology, the concept of satisficing has become one of the leading theoretical approaches to examining and explaining measurement error in surveys. In light of this growing popularity, we particularly welcome submissions that advance the theory, introduce new methods to measure satisficing, show how satisficing theory can be applied to better understand observable response patterns, or present practical applications in question or survey design that aim at reducing satisficing in surveys.

Contributions may cover but are not limited to the following research topics:
- Theoretical advancements of satisficing theory
- Innovative measurement approaches (e.g., instructional manipulation checks, use of response latencies or other paradata; see the sketch after this list)
- Consequences of satisficing (e.g., rounding/heaping, nonsubstantive answers to open-ended questions)
- Effects of survey mode on satisficing (e.g., findings from mixed-mode studies)
- Effects of the sampling methodology and sample characteristics on satisficing (e.g., comparisons of opt-in and probability-based online panels)
- Experimental evidence on how the occurrence of satisficing can be reduced (e.g., innovations in survey, question, or response scale design).
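As an illustration of the paradata-based measurement approaches mentioned in the list above, the following Python sketch flags likely speeders from item-level response latencies. The data, column names, and the 30%-of-median threshold are our own assumptions, not part of any session submission.

    import pandas as pd

    # Hypothetical long-format paradata: one row per respondent-item pair,
    # recording how many seconds the respondent spent on each item.
    latencies = pd.DataFrame({
        "respondent": [1, 1, 1, 2, 2, 2],
        "item":       ["q1", "q2", "q3", "q1", "q2", "q3"],
        "seconds":    [6.2, 5.8, 7.1, 1.0, 0.9, 1.2],
    })

    # Item-level speeding criterion (assumed): answering in less than 30%
    # of the median response time observed for that item.
    medians = latencies.groupby("item")["seconds"].transform("median")
    latencies["speeding"] = latencies["seconds"] < 0.3 * medians

    # Respondent-level summary: share of items answered implausibly fast.
    print(latencies.groupby("respondent")["speeding"].mean())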

Paper Details

1. Satisficing in surveys: A systematic review of the literature
Professor Caroline Roberts (University of Lausanne)
Dr Emily Gilbert (Centre for Longitudinal Studies, Institute of Education)
Professor Nick Allum (University of Essex)
Ms Léïla Eisner (University of Lausanne)

In 1987, Krosnick and Alwin published an article about response order effects in survey measurement in which they presented a cognitive theory for why some respondents might exhibit such effects, based on Herbert Simon's (1956) concept of 'satisficing'. The approach was later elaborated in an article by Krosnick (1991) to account for a range of other response effects often observed in attitudinal data, attributing them to respondents shortcutting the cognitive processes necessary for reporting answers accurately (Tourangeau, 1984). In the past 25 years, Krosnick's article has become one of the most frequently cited in the field of survey methodology, and satisficing theory has become a popular framework for investigating differences in response quality across survey designs and respondent subgroups. Despite its popularity, however, there has been considerable variation in the methods used in applications of the theory, and comparatively few studies have explicitly aimed to test it. Furthermore, the mixed empirical evidence suggests that the theory may hold for certain types of response effect in some settings and not for others.

In this paper, we present the final results of a systematic review of published research that has drawn on the satisficing concept, appearing in English-language journals between 1987 and 2015. We first presented preliminary results of this study at ESRA 2011, but have since extended and finalised our review, allowing us to present a comprehensive overview of 25 years of satisficing research, summarising the empirical evidence that has been published to date, and developing ideas for future research directions.

We use a content-analytic approach to code the methodological features of the research that has been undertaken, the extent to which it addresses the theory as developed by Krosnick (1991), and whether it provides results consistent with the theory's main tenets. In particular, we identify the dependent and independent variables analysed in each study: the indicators of strong and weak satisficing, and the factors purported to influence the prevalence of satisficing (respondent ability, respondent motivation, and task difficulty). We also record the main and multiplicative effects reported. Our aim is to draw conclusions about the validity of the theory in relation to different types of response effect, to provide a resource for researchers interested in conducting meta-analyses of the published research, and to stimulate future refinements of our understanding of the response strategies respondents use to cope with the cognitive demands of answering survey questionnaires.
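For illustration only, a minimal Python sketch of what one coded study record in such a review might look like; the field names and example values are our own assumptions, not the authors' coding instrument.

    from dataclasses import dataclass

    # Hypothetical coding record for one reviewed study. Fields mirror the
    # quantities described above: the satisficing indicators analysed, the
    # theorized factors examined, and the reported effects.
    @dataclass
    class CodedStudy:
        citation: str
        indicators: list           # e.g., ["straight-lining", "don't know responses"]
        factors: list              # subset of {"ability", "motivation", "task difficulty"}
        main_effects_support: bool      # main effects consistent with the theory?
        interaction_reported: bool      # multiplicative (factor x factor) effects reported?

    example = CodedStudy(
        citation="Hypothetical et al. (2005)",
        indicators=["response order effect"],
        factors=["ability", "task difficulty"],
        main_effects_support=True,
        interaction_reported=False,
    )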


2. Satisficing in online panels
Ms Carina Cornesse (Mannheim University and GESIS)
Professor Annelies Blom (Mannheim University)

The ongoing debate about the general quality of nonprobability online panels has mainly focused on whether these panels achieve representative sets of respondents. While the number of publications on the representativeness of nonprobability panels is increasing, less attention has so far been paid to measurement error in probability versus nonprobability panels.
In our paper, we investigate whether there are differences in satisficing across online panels using three indicators that operationalize survey satisficing (item nonresponse and non-substantive answers, straight-lining in grids, and mid-point selection in a visual design experiment). These indicators are included in a questionnaire module that is implemented across 9 online panels in Germany. One of these panels is the German Internet Panel (GIP). The other online panels are commercial panels. The 9 online panels differ in their sampling and recruitment methods.
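A minimal Python sketch, under assumed column names and coding conventions, of how the three indicators described above could be computed from a respondent-by-item response matrix (the visual design experiment is simplified here to mid-point selection on a 5-point grid):

    import pandas as pd

    # Hypothetical responses: 5-point items q1-q4 form one grid; NaN marks
    # item nonresponse and -9 a non-substantive ("don't know") answer.
    df = pd.DataFrame({
        "q1": [3, 3, None, 2],
        "q2": [3, 4, -9,   2],
        "q3": [3, 2, 5,    2],
        "q4": [3, 5, -9,   2],
    })
    grid = ["q1", "q2", "q3", "q4"]

    # Indicator 1: share of missing or non-substantive answers per respondent.
    nonresponse = (df[grid].isna() | (df[grid] == -9)).mean(axis=1)

    # Indicator 2: straight-lining, i.e., identical answers across the grid.
    straightlining = df[grid].nunique(axis=1).eq(1)

    # Indicator 3: share of mid-point answers on the 5-point scale.
    midpoint = (df[grid] == 3).mean(axis=1)

    print(pd.DataFrame({"nonresponse": nonresponse,
                        "straightlining": straightlining,
                        "midpoint": midpoint}))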
Preliminary findings suggest that the online panels vary in the amount and type of satisficing, especially with regard to straight-lining in grids. In addition, some of the variance in straight-lining across the online panels can be explained by probability versus nonprobability sampling.


3. Do question order effects generalize across cultures?
Professor Tobias Stark (Utrecht University)
Dr Henning Silber (GESIS)
Professor Jon Krosnick (Stanford University)
Professor Annelies Blom (University of Mannheim)

This research investigated whether question order effects varied across 12 countries and whether potential differences could be explained by country-specific variation in survey satisficing. In particular, we tested whether findings of question order experiments reported in the U.S. three decades ago would replicate in U.S. online surveys today and whether they would generalize to 11 other countries (Canada, Denmark, Germany, Iceland, Japan, the Netherlands, Norway, Portugal, Sweden, Taiwan, and the United Kingdom; total N = 25,607). One question order effect involved the norm of evenhandedness, and the other involved a perceptual contrast between two questions about abortion. We proposed that question order effects due to the norm of evenhandedness only occur if respondents prefer the entity mentioned in one question over the entity mentioned in the other (in our case, businesses and labor unions). This condition was met in the U.S. and 6 other countries. The question order effect replicated in the U.S. and generalized to all but one of the countries in which the necessary condition was met (Iceland). Unexpectedly, the effect was not moderated by respondents' education (a proxy for cognitive ability), as satisficing theory would predict.
We further proposed that question order effects due to perceptual contrast should only occur if respondents consider one of the two questions to give a more compelling reason than the other (in our case, for abortion). This condition was met in all but the three Scandinavian countries in our sample. This question order effect also replicated in the United States. Interestingly, it generalized to all countries, not just those that met the necessary condition. This finding suggests that varying cultural norms between countries can make it difficult for researchers to detect response effects caused by a perceptual contrast. With respect to cognitive ability, there was no evidence that this question order effect was moderated by education either, leading us to conclude that satisficing does not seem to underlie these two question order effects.
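As a sketch of how such a moderation test can be set up in Python (the data are simulated and all variable names are our own; the authors' actual model specification is not reproduced here):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000

    # Simulated data: experimental question order, a binary answer to the
    # target item, and low education as a proxy for cognitive ability.
    target_first = rng.integers(0, 2, n)
    low_educ = rng.integers(0, 2, n)
    # Build in an order effect that is larger for low-education respondents,
    # the pattern satisficing theory would predict.
    xb = -0.2 + 0.3 * target_first + 0.5 * target_first * low_educ
    agree = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)
    df = pd.DataFrame({"target_first": target_first,
                       "low_educ": low_educ, "agree": agree})

    # The question order effect is the main effect of target_first; its
    # moderation by ability is the target_first:low_educ interaction.
    model = smf.logit("agree ~ target_first * low_educ", data=df).fit()
    print(model.summary())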


4. Within-Respondent Variation in Satisficing across Waves of a Panel Survey
Dr Joss Roßmann (GESIS - Leibniz Institute for the Social Sciences)

Satisficing response behavior is a severe threat to the data quality of surveys. Yet, to date no study has systematically explored the stability of satisficing in repeated interviews of the same respondents over time. Consequently, knowledge on whether satisficing is more strongly affected by time-varying or time-invariant characteristics of the respondents and the interview situation is scarce. Gaining insights into these issues is particularly important for survey methodologists and practitioners in panel research, because the effectiveness of different approaches to coping with satisficing depends on the impact of time-varying and time-invariant characteristics on respondents' response behavior over time. The present study therefore set out to answer two related research questions: First, how large is the within-respondent variation in satisficing in repeated interviews of the same respondents? And second, to what extent can observed within-respondent variation and stability be attributed to time-varying and time-invariant characteristics of the respondents and the interview situation?
To address these questions, the present study used data from three waves of a web-based panel survey on politics and elections conducted during the campaign for the 2013 German Bundestag election. For each wave of the panel, respondents (n = 4,765) were classified as either optimizers or satisficers using latent class analysis with five common indicators of satisficing response behavior (i.e., speeding, straightlining, don't know answers, mid-point responses, and nonsubstantive answers to a demanding open-ended attitude question).
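The classification step can be illustrated with a hand-rolled two-class latent class (Bernoulli mixture) model fitted by EM. This assumes the five indicators are coded as binary flags and is only a sketch, not the software used in the study.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical binary indicator matrix: rows are respondents; columns are
    # speeding, straightlining, don't know, mid-point, nonsubstantive answer.
    X = (rng.random((500, 5)) < 0.15).astype(float)

    K, (n, m) = 2, X.shape
    pi = np.full(K, 1 / K)                 # latent class sizes
    theta = rng.uniform(0.2, 0.8, (K, m))  # per-class indicator probabilities

    for _ in range(200):
        # E-step: posterior class membership given current parameters.
        loglik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        post = pi * np.exp(loglik)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class sizes and indicator probabilities.
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)

    # The class with the higher average indicator probability plays the role
    # of the "satisficer" class; its size is the estimated satisficer share.
    satisficer = theta.mean(axis=1).argmax()
    print("estimated satisficer share:", round(float(pi[satisficer]), 3))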
The results showed that between 10.2% and 10.9% of the respondents pursued satisficing as a response strategy across the waves of the panel. Yet, within-respondent variation in satisficing over the three waves was rather limited. While within-respondent stability can be attributed to time-invariant characteristics of the respondents (e.g., cognitive sophistication), the observed variation suggests that time-varying characteristics of the respondents (e.g., motivation) and the interview situation affect the response strategy of survey participants. Building on these findings, we applied a fixed effects regression model to study the effects of time-varying characteristics on within-respondent variation in response strategy. The results showed that changes in respondents' motivation in particular explained within-respondent variation in response strategy. Furthermore, the results of a hybrid regression model supported the notion that both time-invariant and time-varying characteristics of the respondents and the interview situation affect satisficing in a panel survey.
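To make the fixed effects logic concrete, here is a minimal within-transformation sketch on invented respondent-wave data (variable names and the linear probability specification are our own simplifications, not the study's model):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)

    # Hypothetical panel: 300 respondents x 3 waves, a time-varying
    # motivation score, and a binary satisficing classification per wave.
    n, waves = 300, 3
    df = pd.DataFrame({
        "id": np.repeat(np.arange(n), waves),
        "motivation": rng.normal(size=n * waves),
    })
    df["satisficer"] = (rng.random(n * waves) <
                        np.clip(0.15 - 0.05 * df["motivation"], 0, 1)).astype(int)

    # Within transformation: demeaning each variable within respondent
    # removes all time-invariant respondent characteristics, so the slope
    # reflects only within-respondent change across waves.
    cols = ["satisficer", "motivation"]
    demeaned = df[cols] - df.groupby("id")[cols].transform("mean")

    fe = sm.OLS(demeaned["satisficer"], demeaned[["motivation"]]).fit()
    print(fe.params)  # effect of within-respondent changes in motivation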
In conclusion, our study provides evidence that satisficing is not a fully time-invariant, trait-like characteristic of respondents. Rather, satisficing should be understood as a response strategy that is affected by both time-varying and time-invariant characteristics of respondents and the interview situation. Accordingly, we suggest that approaches to coping with satisficing should focus on motivating respondents and reducing response burden.