ESRA 2017 Programme

Tuesday 18th July      Wednesday 19th July      Thursday 20th July      Friday 21st July     


Tuesday 18th July, 14:00 - 15:30 Room: Q2 AUD3

Satisficing in Surveys: Theoretical and Methodological Developments 2

Chair Dr Joss Rossmann (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1 Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 2 Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)

Session Details

Satisficing theory (Krosnick 1991, 1999) provides a framework for the analysis of respondents’ response behaviors in surveys and, accordingly, the quality of their responses. The theory distinguishes three response strategies: First, optimizing refers to the complete and effortful execution of all four cognitive steps of the response process. That is, respondents have to interpret the question, retrieve relevant information from their memory, form a judgment based on the available information, and translate the judgment into a meaningful answer. Second, if the task of answering a question is difficult and respondents lack the necessary abilities or motivation to provide an accurate answer, they might decide to perform the steps of information retrieval and judgment less thoroughly to reduce their response efforts. Thus, weak satisficing results in merely satisfactory answers (e.g., selecting the first response option that seems acceptable). Third, under certain conditions respondents might simplify the response task even further by superficially interpreting questions and completely skipping the steps of information retrieval and judgment. Strong satisficing is indicated, among other things, by random, nonsubstantive, or non-differentiated responses.

Since its introduction in survey methodology, the concept of satisficing has become one of the leading theoretical approaches to examining and explaining measurement error in surveys. In light of its increasing popularity, we particularly welcome submissions that present advancements to the theory, introduce new methods to measure satisficing, show how satisficing theory can be applied to better understand the occurrence of observable response patterns, or present practical applications in question or survey design that aim at reducing satisficing in surveys.

Contributions may cover but are not limited to the following research topics:
- Theoretical advancements of satisficing theory
- Innovative measurement approaches (e.g., instructional manipulation checks, use of response latencies or other paradata)
- Consequences of satisficing (e.g., rounding/heaping, nonsubstantive answers to open-ended questions)
- Effects of survey mode on satisficing (e.g., findings from mixed-mode studies)
- Effects of the sampling methodology and sample characteristics on satisficing (e.g., comparisons of opt-in and probability-based online panels)
- Experimental evidence on how the occurrence of satisficing can be reduced (e.g., innovations in survey, question, or response scale design).
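One of the measurement approaches above, non-differentiation ("straightlining") across a grid of rating-scale items, lends itself to a very simple observable indicator of strong satisficing. The sketch below is purely illustrative (the function name and integer response coding are assumptions, not part of any submission):

```python
# Hypothetical sketch: flag non-differentiation ("straightlining")
# in a battery of rating-scale items, a common indicator of
# strong satisficing. Responses are assumed to be coded as integers.

def is_straightliner(answers):
    """True if all answers in a multi-item battery are identical."""
    return len(answers) > 1 and len(set(answers)) == 1

grid = [4, 4, 4, 4, 4]       # same answer to every item -> flagged
varied = [4, 2, 5, 3, 4]     # differentiated answers -> not flagged
```

In practice such a flag would be combined with other paradata (e.g., response latencies) before classifying a respondent as satisficing.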

Paper Details

1. Heaping in self-reported income data - a kind of weak satisficing
Ms Ariane Würbach (Leibniz Institute for Educational Trajectories (LIfBi))

To a large extent, research data in the social sciences originates from survey interviews. Important research objectives therefore include not only non-response but also the accuracy of self-reported data. Heaping refers to aberrant concentrations of response values at specific points of the range, e.g. multiples of one hundred or one thousand, while all other responses are given at high precision. Heaping behavior in surveyed income data is an artifact which can be regarded as a form of satisficing. Krosnick (1991) adapted the bounded rationality theory of Simon (1955) and proposed a theory of statistical survey satisficing. Optimal question answering involves high cognitive effort, and satisficing describes a cognitive strategy in which a respondent screens through several eligible options but stops searching as soon as a sufficient outcome is achieved. The tendency to reduce cognitive burden is related to the respondent’s ability and motivation but also to the task difficulty. Holbrook et al. (2014) also explored satisficing, measured as shorter response latencies and less accuracy (heaping). The prevalence of heaping is systematically higher in retrospective questions, especially when the respondent is either uncertain about the true value or hesitates to report. Analyses of the income data from the German National Educational Panel Study (NEPS) support this argument. Owing to task difficulty, heaping is more frequent in reports on household income than in reports on individual income. Moreover, the data at hand strongly support the assumption that interview duration is an indicator of the depth of evaluation of the questions and eligible answers. Interview duration exhibits a significant relationship to response accuracy, i.e. respondents with longer interview durations less often resort to heaping.
The educational level is considered a proxy for the respondent’s ability (Narayan & Krosnick, 1996), and the incentive level is supposed to increase the extrinsic motivation of respondents to give correct responses. Longitudinal analyses will show whether and to what degree heaping is a precursor of subsequent non-response (Serfling, 2006; Hanisch, 2003). Precision stability as well as possible transitions to other response styles, such as skipping items (strong satisficing) or dropping out, will be examined across consecutive waves.
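The heaping indicator described above can be sketched as a simple rate: the share of nonzero income reports that fall exactly on round values. The function name, threshold, and example values below are illustrative assumptions, not taken from the NEPS codebook:

```python
# Hypothetical sketch: measure heaping in self-reported income data
# as the share of nonzero reports that are exact multiples of a
# round base value (e.g., 100 euros). Purely illustrative.

def heaping_rate(incomes, base=100):
    """Share of nonzero income reports that are multiples of `base`."""
    nonzero = [x for x in incomes if x > 0]
    if not nonzero:
        return 0.0
    heaped = sum(1 for x in nonzero if x % base == 0)
    return heaped / len(nonzero)

# Five of these seven reports land on multiples of 100
reports = [2350, 2400, 3000, 1875, 2000, 4100, 2500]
rate = heaping_rate(reports, base=100)
```

A rate well above what the digit distribution of true incomes would produce suggests rounding, i.e. a possible weak-satisficing response style.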

2. Question Difficulty and Measurement Error
Dr Henning Silber (GESIS - Leibniz-Institute for the Social Sciences)
Dr Tobias Gummer (GESIS - Leibniz-Institute for the Social Sciences)
Dr Joss Roßmann (GESIS - Leibniz-Institute for the Social Sciences)

Many studies have illustrated that question characteristics influence responses. For example, an ambiguous question causes more measurement error than a well-formulated question that is easy to understand and to answer. In this study, we compare various measures of question difficulty, such as the number of words, the number of response categories, question type, question sensitivity, and question complexity, in their impact on respondents' answering behavior. The results provide evidence on whether the difficulty of the question influences the quality of the measurement. We also show how the different measures vary in their predictive ability.

3. Satisficing and Errors in Reporting Pensions: How Do Difficulty, Ability, and Motivation Influence the Errors in Reporting Pensions Due to Old Age?
Mr Patrick Lazarevic (TU Dortmund University)

The theory of satisficing provides a useful framework for examining measurement errors in surveys. It proposes that the respondent's ability, the respondent's motivation, and the difficulty of a question influence the quality of an answer and therefore measurement error: higher difficulty should increase errors, while greater ability and motivation should reduce them. A common problem in determining the extent of measurement error in survey research is that the 'true value' is usually unknown. One way to obtain a 'true' reference value for comparison is to link self-reports from a survey to process-produced or administrative data that represent the value in question.

Therefore, in order to determine the measurement error for self-reported pensions due to old age, administrative data from the German Pension Fund are linked to self-reported data of 917 participants (434 male, 483 female) from the Survey of Health, Ageing and Retirement in Europe (SHARE). The discrepancies between these data sources relative to the administrative value are used as a dependent variable in structural equation models. As suggested by the theory, paths from the latent variables ability, difficulty, and motivation are specified.
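The dependent variable described here, the discrepancy between self-report and administrative record relative to the administrative value, can be sketched as follows. The function name and example values are illustrative assumptions; the actual SHARE record-linkage variables differ:

```python
# Hypothetical sketch of the dependent variable: the signed
# discrepancy between a self-reported pension and the linked
# administrative value, relative to the administrative value.

def relative_error(self_report, admin_value):
    """Signed relative discrepancy; negative means under-reporting."""
    return (self_report - admin_value) / admin_value

# A respondent reporting 1200 EUR against an administrative 1250 EUR
err = relative_error(1200.0, 1250.0)
```

The sign preserves the direction of the error, so over- and under-reporting can be modeled separately if needed.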

Firstly, the latent variable 'ability' consists of a performance test of the respondent's memory, the respondent's self-rated memory, and interviewer ratings of how often the respondent asked the interviewer for clarification and understood the questions. This latent variable serves as a measure of the respondent's cognitive functioning and should reduce errors. Secondly, the latent variable 'difficulty' of the question is represented by the share of the respondent's old-age pensions in the self-reported total household income, since a source of income is presumably easier to recall if it makes a larger contribution to the household's income as a whole. Thirdly, for the operationalization of motivation, three items from an interviewer survey conducted by SHARE are matched to the individual data. Arguably, a greater willingness to react to the respondent's needs should increase the respondent's overall motivation to optimize their responses. Therefore, the interviewer's self-rated willingness to explain questions to respondents, to shorten long questions if respondents have problems concentrating, and the unwillingness to only reread questions exactly as worded if the respondent has problems understanding a question are used as a proxy for individual 'motivation'. Additionally, age is used as a separate factor influencing both the respondent's ability and the errors directly.

Preliminary results show insignificant path coefficients for ability, age, and motivation on errors, while age significantly influences ability and difficulty significantly influences errors: greater age reduces ability, and lower difficulty reduces errors. This suggests that, when it comes to reporting individual income components, the respondent's ability and motivation are not the most influential factors; rather, the difficulty of the task and the importance of the component in question matter most.

4. The Factorial Survey: The Dependency of the Response Behaviour on the Presentation Format and the Answer Format of Vignettes
Professor Hermann Dülmer (University of Cologne)
Dr Hawal Shamon (University of Cologne)
Mr Adam Giza (Institut der Deutschen Wirtschaft Köln)

The factorial survey is an experimental design in which the researcher constructs varying descriptions of situations or persons (vignettes) that respondents judge under a particular aspect. An advantage of factorial surveys is that detailed vignette descriptions come closer to daily life than general survey questions (Beck/Opp 2001). Detailed descriptions also lead to higher standardization. This helps solve the problem of different subjective frameworks, because respondents no longer have to retrieve relevant information from their memory in order to answer the questions (cf. Tourangeau et al. 2000). Higher standardization, however, also means that choosing an optimal presentation format becomes more important: Some researchers prefer presenting vignettes in text format as short stories, others prefer presenting the central information of vignettes in a tabular format. For capturing judgment behaviour, sometimes a closed answer format (fixed answer scale) and sometimes an open answer format is used. To date, no published studies have analysed whether the presentation format of vignettes has an impact on the answer behaviour of respondents (cf. Auspurg/Hinz 2015). With our study we aim to narrow this research gap by focussing on the impact of different presentation formats on the occurrence of different satisficing strategies (Krosnick 1991). Based on an internet experiment conducted with a population sample, we find evidence that tabular formats outperform text vignettes with regard to vignette non-response as well as response time. This especially applies to less well-educated and older people. The effect becomes even stronger when a closed instead of an open answer format is chosen.