
ESRA 2023 Glance Program


All time references are in CEST

Inattentiveness and Satisficing in Self-Administered Surveys 1

Session Organisers Dr Joss Roßmann (GESIS - Leibniz Institute for the Social Sciences, Germany)
Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences, Germany)
Dr Sebastian Lundmark (SOM Institute, Sweden)
Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences, Germany)
Time Tuesday 18 July, 11:00 - 12:30
Room U6-01e

Within the last two decades, survey data collection has increasingly moved from interviewer administration to self-administration, such as web questionnaires or mixed-mode designs combining web and mail questionnaires. This trend was further accelerated by the COVID-19 pandemic, during which interviewer-administered questionnaires became considerably more difficult to implement, leading practitioners to favor self-administered methods instead. However, methodological research has cautioned that the data quality of self-administered surveys may be more challenged by inattentive and/or satisficing respondents than surveys in which interviewers guide the respondents through the response process.

Therefore, the session Inattentiveness and Satisficing in Self-Administered Surveys welcomes submissions that present conceptual advancements in detecting inattentive respondents, careless responding, and satisficing behavior in self-administered questionnaires. We particularly welcome proposals that introduce or evaluate new measurement methods (e.g., attention check items, paradata), as well as proposals that assess how questionnaire and question design can be applied to mitigate the problem of low-quality responses in self-administered surveys.

Contributions may cover but are not limited to the following research topics:
• Conceptual advancements in the study of satisficing and careless responding.
• Innovative approaches and advancements in measuring respondent attentiveness and motivation (e.g., instructed response attention checks, instructional manipulation checks, bogus items, use of response latencies, or other paradata; see the sketch after this list).
• Effects of survey design decisions (e.g., sampling method, (mixed) mode choice, questionnaire and question design, and fieldwork interventions) on respondent inattentiveness and/or survey satisficing.
• Experimental, observational, or simulation studies on the consequences of inattentiveness and/or satisficing for results of substantive analyses, experimental treatment effects, or survey data quality.
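To make the measurement ideas above concrete, the following minimal Python sketch shows how failing an instructed response attention check could be combined with a response latency (paradata) indicator to flag potentially inattentive respondents. All column names, values, and thresholds are illustrative assumptions, not taken from any particular study.

import pandas as pd

# Hypothetical respondent-level data; column names and values are
# illustrative assumptions, not variables from any of the studies above.
df = pd.DataFrame({
    "resp_id": [1, 2, 3, 4],
    "irc_item": [3, 5, 3, 1],               # instructed response item: "select option 3"
    "latency_sec": [42.0, 3.1, 55.4, 2.5],  # page response latency (paradata)
})

IRC_CORRECT = 3       # the response the instruction asked for
SPEEDER_CUTOFF = 5.0  # latency threshold in seconds; a study-specific choice

# Flag respondents who fail the instructed response check or answer
# implausibly fast; combining indicators reduces single-measure noise.
df["failed_irc"] = df["irc_item"] != IRC_CORRECT
df["speeder"] = df["latency_sec"] < SPEEDER_CUTOFF
df["flag_inattentive"] = df["failed_irc"] | df["speeder"]

print(df[["resp_id", "failed_irc", "speeder", "flag_inattentive"]])

In practice, such flags are typically used as covariates or robustness checks rather than as automatic exclusion criteria, given the false-positive concerns discussed in the papers below.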

Keywords: Self-administered surveys, web surveys, mail surveys, survey design, survey data quality, inattentiveness, satisficing, careless response, attention checks

Papers

Comparing different types of respondents’ attentiveness measures: Experimental evidence from the German Internet Panel and the Swedish Citizen Panel

Dr Sebastian Lundmark (University of Gothenburg) - Presenting Author
Dr Henning Silber (GESIS – Leibniz Institute for the Social Sciences)
Dr Joss Roßmann (GESIS – Leibniz Institute for the Social Sciences)
Dr Tobias Gummer (GESIS – Leibniz Institute for the Social Sciences)

Survey research relies on respondents’ cooperation during the interview. Researchers have therefore started to measure respondents’ attentiveness to control for attention levels in their analyses (e.g., Berinsky et al., 2016). While various attentiveness measures have been suggested, there is limited experimental evidence comparing different measurement types regarding their pass and failure rates. A second issue that has received little attention is false positives when implementing attentiveness measures (Curran & Hauser, 2019): some respondents are aware that their attentiveness is being measured and decide not to comply with the instructions in the attention measurement, leading to incorrect identification of (in)attentiveness. To address these research gaps, we randomly assigned respondents to different types of attentiveness measures within the German Internet Panel (GIP), a probability-based online survey (N=2,900), and the non-probability online part of the Swedish Citizen Panel (SCP) (N=3,800). Data were collected in the summer and winter of 2022. The attentiveness measures included an instructed response item, a bogus item, and two numeric counting tasks. In the GIP study, respondents were randomly assigned to one of four attention measures and then reported whether they had purposefully complied with the instructions or not. The SCP study replicated and extended the GIP study: respondents were randomly assigned to one early and one late attentiveness measure to explore whether attentiveness improved after an attention measure had been administered. Respondents also reported their attitudes toward attention measures and whether they understood the instructions. Altogether, we discuss the usability of four different attentiveness measures, especially with regard to false positives, and whether they can improve response quality.
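As a rough illustration of the kind of comparison described above, the sketch below contrasts failure rates across four randomly assigned attention measures with a chi-square test of independence. The counts are invented for illustration only and are not results from the GIP or SCP studies.

from scipy.stats import chi2_contingency

# Invented (passed, failed) counts for four attention measures; these
# numbers are illustrative only, not findings from the GIP or SCP studies.
counts = {
    "instructed_response": (812, 93),
    "bogus_item": (771, 134),
    "counting_task_a": (730, 170),
    "counting_task_b": (741, 164),
}

for name, (passed, failed) in counts.items():
    print(f"{name}: failure rate = {failed / (passed + failed):.1%}")

# Test whether failure rates differ across the randomly assigned measures
table = [list(cell) for cell in counts.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")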


Attention check failures in online surveys: On the influence of survey design options and respondent-specific characteristics

Dr Hawal Shamon (Forschungszentrum Jülich) - Presenting Author

Online surveys are becoming an increasingly important mode of administration in the social sciences. They are used in national probability-based online panels such as the German GESIS Panel and the Dutch LISS Panel and have also been tested in a multinational pilot project of the European Social Survey (CROss-National Online Survey, CRONOS). Compared to the more traditional interviewer-based modes, such as face-to-face or telephone interviews, online surveys may suffer from the physical distance and the concomitant anonymity of the measurement process, since the latter makes it easier for respondents to evade the question-and-answer process and to provide so-called careless answers to survey questions. As a countermeasure, researchers have made use of attention checks to detect respondents who engage in careless responding; indeed, attention checks seem to have become a standard instrument for measuring survey participants’ attentiveness during online interviews. Still, little is known about the extent to which (survey) design options reduce careless responding or about the process underlying attention check failures. Clarifying the latter issue is all the more important since practitioners use attention checks as a basis for decisions during data cleaning, even though their use is not without controversy. To contribute to a better understanding of these issues, we analyze survey data from different online studies whose participants were recruited from commercial access panels via quota sampling in Germany. Our theoretically grounded analysis of attention check failures considers both experimentally varied and respondent-specific factors and allows for nuanced conclusions on the use of attention checks in online surveys based on commercial access panels.
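One common way to model attention check failures as a function of design options and respondent characteristics, in the spirit of the analysis described above, is a logistic regression. The sketch below uses simulated data; all variable names and effect sizes are illustrative assumptions, not the study’s operationalization.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate respondent-level data; predictors and coefficients are
# illustrative assumptions, not taken from the study described above.
rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "grid_design": rng.integers(0, 2, n),  # experimentally varied layout (0/1)
    "age": rng.integers(18, 75, n),
    "educ_high": rng.integers(0, 2, n),    # high educational attainment (0/1)
})
linpred = -1.5 + 0.4 * df["grid_design"] - 0.02 * (df["age"] - 45)
df["failed_check"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

# Logistic regression of failure on a design option and respondent traits
model = smf.logit("failed_check ~ grid_design + age + educ_high", data=df).fit()
print(model.summary())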


An Innovative Approach to the Measurement of Satisficing in Open-ended Questions from Web Probing

Miss Dörte Naber (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Miss Maria del Carmen Navarro-Gonzalez (University of Granada)
Mr Jose-Luis Padilla (University of Granada)

Self-administered surveys place a greater response burden on respondents than personal interviews. This issue is even more evident for open-ended questions than for closed-ended questions, as respondents have to make up their minds and formulate an answer in their own words (e.g., Zuell & Scholz, 2015). Consequently, respondents might try to reduce their response burden by applying satisficing strategies (Krosnick, 1991), which in turn result in low-quality survey data. We will present an innovative approach to defining and measuring satisficing behavior in open-ended questions beyond the common concept of nonresponse (Behr et al., 2017, 2020). Furthermore, we will show how task difficulty, respondents’ motivation, and respondents’ ability are related to satisficing in open-ended questions. We will use data from an experimental Web Probing study run at the University of Granada in 2019 (N=561 German respondents). The open-ended questions are three subsequent probe questions to the same survey item (“Taking all things together, how happy would you say you are?”). The probe questions are a category-selection probe (“Please explain why you selected ‘xxx’.”), a specific probe (“What areas of life did you have in mind when you were answering the question?”), and a comprehension probe (“What do you understand by ‘being happy’?”), designed to introduce different levels of task difficulty. Respondents were randomly assigned to one of two probe sequences, assuming that response burden increases while respondents’ motivation decreases across probe questions. Respondents’ ability was measured by educational attainment and an item on practice at thinking about the topic. Based on the results, researchers will be able to detect potential satisficing behavior in open-ended questions in self-completion surveys more clearly and to identify related respondent and survey characteristics, ultimately leading to better data.
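To illustrate how satisficing in open-ended probe answers might be operationalized beyond simple nonresponse, the following minimal sketch flags empty, non-substantive, and very short answers. The indicator set, patterns, and thresholds are illustrative assumptions, not the coding scheme used in this study.

import re

# Toy probe answers; thresholds and patterns below are illustrative
# assumptions, not the operationalization used in the study above.
answers = [
    "",                                                            # nonresponse
    "dont know",                                                   # non-substantive
    "idk",
    "Family, health, and having enough money to live without worries.",
]

NON_SUBSTANTIVE = re.compile(r"^\s*(idk|don'?t know|no idea|-+)\s*$", re.I)
MIN_WORDS = 4  # below this, treat the answer as possibly satisficed

def satisficing_flags(text):
    """Return simple satisficing indicators for one open-ended answer."""
    n_words = len(text.split())
    return {
        "nonresponse": n_words == 0,
        "non_substantive": bool(NON_SUBSTANTIVE.match(text)),
        "too_short": 0 < n_words < MIN_WORDS,
    }

for a in answers:
    print(repr(a[:40]), satisficing_flags(a))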