Tuesday 18th July, 16:00 - 17:30, Room: Q2 AUD3


Satisficing in Surveys: Theoretical and Methodological Developments 3

Chair: Dr Joss Rossmann (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1: Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 2: Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)

Session Details

Satisficing theory (Krosnick 1991, 1999) provides a framework for analyzing respondents’ response behaviors in surveys and, accordingly, the quality of their responses. The theory distinguishes three response strategies: First, optimizing refers to the complete and effortful execution of all four cognitive steps of the response process: respondents interpret the question, retrieve relevant information from memory, form a judgment based on the available information, and translate that judgment into a meaningful answer. Second, if the task of answering a question is difficult and respondents lack the ability or motivation to provide an accurate answer, they may perform the retrieval and judgment steps less thoroughly in order to reduce their response effort. Such weak satisficing results in merely satisfactory answers (e.g., selecting the first response option that seems acceptable). Third, under certain conditions respondents may simplify the response task even further by interpreting questions only superficially and skipping the retrieval and judgment steps entirely. Strong satisficing is indicated, among other things, by random, nonsubstantive, or non-differentiated responses.

Since its introduction into survey methodology, the concept of satisficing has become one of the leading theoretical approaches to examining and explaining measurement error in surveys. In light of its increasing popularity, we particularly welcome submissions that advance the theory, introduce new methods for measuring satisficing, show how satisficing theory can be applied to better understand the occurrence of observable response patterns, or present practical applications in question or survey design that aim at reducing satisficing in surveys.

Contributions may cover but are not limited to the following research topics:
- Theoretical advancements of satisficing theory
- Innovative measurement approaches (e.g., instructional manipulation checks, use of response latencies or other paradata)
- Consequences of satisficing (e.g., rounding/heaping, nonsubstantive answers to open-ended questions)
- Effects of survey mode on satisficing (e.g., findings from mixed-mode studies)
- Effects of the sampling methodology and sample characteristics on satisficing (e.g., comparisons of opt-in and probability-based online panels)
- Experimental evidence on how the occurrence of satisficing can be reduced (e.g., innovations in survey, question, or response scale design).

Paper Details

1. The Effect of Respondent Commitment on Response Quality in Two Online Surveys
Ms Kristen Cibelli Hibben (University of Michigan)

Answering questions completely, accurately, and honestly is not always the top priority for survey respondents. To the extent that inaccuracy in survey responses is due to insufficient effort by respondents, it may help to directly ask respondents to try harder and to elicit an explicit agreement from them to do so. The rationale for this technique is that agreeing, or stating one's intention, to behave in a certain way commits a person to carry out the terms of the agreement. Charles Cannell and his associates pioneered this technique in the late 1970s, and the results were promising. Existing studies found that respondents in the commitment condition (vs. control) gave significantly more mentions to open-ended items, reported more health conditions and larger amounts of food and drink consumed, scored higher on a precise-to-the-day reporting index for health events, checked outside sources more often, and reported more sensitive information (Oksenberg et al., 1977a; Oksenberg et al., 1977b). Similar results for commitment were observed in a telephone survey (Miller & Cannell, 1982). In an experimental web survey, Conrad et al. (under review) found that commitment improved response accuracy particularly among respondents with a college education or more (results for the lower education groups were not significant) and that only a very small percentage of respondents (1%) refused to make the commitment. While promising, much of this research was conducted decades ago, in interviewer-administered modes, with limited measures of data quality.

The proposed paper presents results from two web-based studies examining the effect of commitment. The first study measures the effect of commitment – “yes” or “no” – in an online labor force survey. The experiment was embedded in a survey conducted by the Institute for Labor Market and Occupational Research (Institut für Arbeitsmarkt- und Berufsforschung, IAB) in Germany, fielded in December 2014 – January 2015. The second study measures the effect of asking respondents to commit to engaging in several specific response behaviors that seem likely to promote data quality, such as reading the questions carefully and trying to be as precise as possible, in an online survey of the parents of child patients at the University of Michigan (UM) Health System. It was fielded in March – May 2016. Both studies examine the effect of commitment on response accuracy as verified against administrative records – previous studies evaluating commitment have used only indirect measures of accuracy – in addition to its effect on satisficing behaviors, item nonresponse, and socially desirable reporting.

Both studies produced mixed results for the overall effect of commitment. However, Study 1 showed some particularly promising results for those who committed versus those who were invited to commit but did not, as did Study 2 for those who committed to all of the requested response behaviors versus those who committed to only a few. Overall, the results offer insights into the underlying motivation of web survey respondents, such as their willingness to look up information in records, and raise challenging practical questions about how such techniques might be used in production surveys.


2. Humanizing Cues in Internet Surveys: Investigating Respondent Cognitive Processes
Dr Wojciech Jablonski (Utrecht University & University of Lodz)
Dr Katarzyna Grzeszkiewicz-Radulska (University of Lodz)
Dr Aneta Krzewinska (University of Lodz)

In survey methodology, humanizing cues denote procedures that imitate the interviewer and substitute for some of the interviewer's tasks (Tourangeau et al. 2003). Presenting a photo, an audio file, a video of the interviewer asking questions, or an animated person is considered a way to mobilize respondents and attract their attention. However, current methodological research on humanizing cues concentrates only on the interviewer effect and social desirability bias; it does not cover the cognitive processes that are activated while answering survey questions (Krosnick 1991; 1999).

This presentation reports results from an experiment conducted in November and December 2016 among university students (N = 900) as part of a research project funded by the Polish National Science Center. The project aims to estimate the influence of humanizing cues on the quality of the data obtained in internet surveys. Although different data quality indicators were used, in this presentation we refer to those indicators that describe respondents’ tendency to shortcut cognitive processes (satisficing): (a) choosing non-substantive answers to attitude questions; (b) non-differentiation when giving multiple answers on the same response scale; (c) a tendency to agree with any assertion, regardless of its content; (d) choosing options expressing approval of the status quo; and (e) choosing the first reasonable option.

The following types of internet surveys were used in the experiment: (1) CAWI/text (with all stimuli presented as text); (2) CAWI/photo (with stimuli presented as text plus an interviewer photo); and (3) CAWI/movie (with all stimuli presented as video of real interviewers and, additionally, the answer options presented as text). Moreover, (4) CAPI was utilized within the experiment as an additional frame of reference. These versions of the research tools reflect increasing degrees of humanization of the research procedure.
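
To make indicator (b) concrete: non-differentiation (straight-lining) is commonly operationalized by checking whether a respondent gives identical, or nearly identical, answers across a grid of items sharing one response scale. The following minimal Python sketch is our illustration of that idea, not code from the study; the function and variable names are hypothetical.

import statistics

def nondifferentiation(grid_responses):
    # Flag straight-lining: True if the respondent gave the identical
    # answer to every item in the grid. Also return the population
    # standard deviation of the answers as a graded measure of
    # differentiation (0.0 = no differentiation at all).
    straight_lined = len(set(grid_responses)) == 1
    spread = statistics.pstdev(grid_responses)
    return straight_lined, spread

# Example: five attitude items answered on the same 1-5 scale.
print(nondifferentiation([3, 3, 3, 3, 3]))  # (True, 0.0)    -> candidate satisficer
print(nondifferentiation([2, 4, 3, 5, 1]))  # (False, ~1.41) -> differentiated answers

In practice, a flag like this would be combined with the other indicators (a) and (c)-(e) rather than interpreted in isolation.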