
ESRA 2023 Glance Program


All time references are in CEST

Inattentiveness and Satisficing in Self-Administered Surveys 2

Session Organisers: Dr Joss Roßmann (GESIS - Leibniz Institute for the Social Sciences, Germany)
Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences, Germany)
Dr Sebastian Lundmark (SOM Institute, Sweden)
Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences, Germany)
Time: Tuesday 18 July, 14:00 - 15:30
Room: U6-01b

Within the last two decades, survey data collection has increasingly moved from interviewer administration to self-administration, such as web questionnaires or mixed-mode designs combining web and mail questionnaires. This trend was further accelerated by the COVID-19 pandemic, during which interviewer-administered questionnaires became considerably more difficult to implement, leading practitioners to favor self-administered methods instead. However, methodological research has cautioned that the data quality of self-administered surveys may be more vulnerable to inattentive and/or satisficing respondents than surveys in which interviewers guide respondents through the response process.

Therefore, the session Inattentiveness and Satisficing in Self-Administered Surveys welcomes submissions that present conceptual advancements in detecting inattentive respondents, careless responding, and satisficing behavior in self-administered questionnaires. We particularly welcome proposals that introduce or evaluate new measurement methods (e.g., attention check items, paradata), as well as proposals that assess how questionnaire and question design can be used to mitigate the problem of low-quality responses in self-administered surveys.

Contributions may cover but are not limited to the following research topics:
• Conceptual advancements in the study of satisficing and careless responding.
• Innovative approaches and advancements in measuring respondent attentiveness and motivation (e.g., instructed response attention checks, instructional manipulation checks, bogus items, use of response latencies, or other paradata).
• Effects of survey design decisions (e.g., sampling method, (mixed) mode choice, questionnaire and question design, and fieldwork interventions) on respondent inattentiveness and/or survey satisficing.
• Experimental, observational, or simulation studies on the consequences of inattentiveness and/or satisficing for results of substantive analyses, experimental treatment effects, or survey data quality.

Keywords: Self-administered surveys, web surveys, mail surveys, survey design, survey data quality, inattentiveness, satisficing, careless response, attention checks

Papers

What to do with instructional manipulation checks in data analysis? Evidence from the ResPOnsE COVID-19 survey

Dr Riccardo Ladini (University of Milan) - Presenting Author
Dr Nicola Maggini (University of Milan)

The use of attention checks to assess respondents’ attentiveness when answering online surveys has become rather common in social and political research. In particular, Instructional Manipulation Checks (IMCs) are methodological tools that ask respondents to follow a precise set of instructions when choosing an answer option. They have been found to be effective in distinguishing between more and less attentive respondents, as the former tend to provide more consistent answers. Previous research has shown that IMCs can predict the quality of the answers a respondent gives even in previous and subsequent waves of a panel survey. Moreover, the outcomes of IMCs are socially patterned: low-educated people, for instance, are more likely to fail IMCs than highly educated ones.
Within this framework, relevant questions for social researchers using survey data are the following: what are the implications of IMCs for data analysis? What should be done with IMCs in multivariate analysis? While the use of attention checks to assess the internal validity of survey experiments is increasingly common, the same does not apply to traditional multivariate analysis of non-experimental survey data. Indeed, only a minority of the respondents who fail IMCs provide inconsistent answers, so discarding those cases is not recommended. Some researchers have suggested reporting separate analyses by IMC outcome as a robustness check; others have proposed including the IMC as a moderator variable. But do multivariate analyses change substantially when the IMC is not included? If so, when?
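A minimal sketch of the moderator approach mentioned above, assuming a hypothetical respondent-level data set with columns outcome, predictor, age, education and a binary imc_passed indicator; the names and the comparison itself are illustrative, not the authors' analysis:

    # Hypothetical illustration: comparing a model with and without the IMC
    # outcome as a moderator (all column names are assumptions).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_wave.csv")              # assumed file, one row per respondent
    df["imc_passed"] = df["imc_passed"].astype(int)  # 1 = passed the IMC, 0 = failed

    baseline = smf.ols("outcome ~ predictor + age + education", data=df).fit()
    moderated = smf.ols("outcome ~ predictor * imc_passed + age + education", data=df).fit()

    # If the interaction term is negligible and the other coefficients barely move,
    # leaving the IMC out would not change the substantive conclusions.
    print(baseline.params)
    print(moderated.params)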
Using data from the ResPOnsE COVID-19 panel survey, carried out in Italy from April 2020 to December 2022, we aim to answer these questions. In particular, we aim to understand whether IMCs have a differentiated impact on data analysis depending on the cognitive effort required by the questions.


New evidence on response biases time stability in online surveys: use of IRT models and response time data

Dr Marek Muszyński (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
Professor Artur Pokropek (Institute of Philosophy and Sociology, Polish Academy of Sciences)
Dr Tomasz Żółtak (Institute of Philosophy and Sociology, Polish Academy of Sciences)


The presentation aims to broaden knowledge of the temporal stability of survey response behaviours. In this study, we focus on response styles and careless responding, measured in a web survey on two occasions separated by ca. 30 days (> 300 participants). The survey lasted around 30 minutes and comprised numerous screens with different measurement instruments, including personality, impulsivity, trust, reading behaviour and other scales.
The time stability of response styles has been documented (e.g., Weijters et al., 2010; Wetzel et al., 2016), but not with newly proposed response style models based on more advanced indices: IRTrees (Boeckenholt, 2012; Khorramdel & von Davier, 2014) and multidimensional generalized partial credit models (Henninger & Meiser, 2019). The time stability of careless responding has only been studied preliminarily (cf. Bowling et al., 2016; Camus, 2015). Our paper aims to fill these research gaps by investigating the time stability of response styles and careless responding using newly developed indicators and computer-based paradata, such as survey response times (Ulitzsch et al., 2022).
We aim to expand previous research on the temporal stability of survey behaviour by providing additional information on response processes that recur across measurements. We also replicate and extend previous results by providing a wide set of measures of individual (e.g., personality, impulsivity, need for cognition) and situational (e.g., cognitive load, self-reported motivation, interest and difficulty) characteristics to test how far response behaviours are state-driven versus trait-driven (Danner et al., 2015). Finally, we extend the research by testing the time stability of response time indicators (total time, screen time and item-level time; Kroehne et al., 2019; Ulitzsch et al., 2022) as well as their ability to model careless responding and response styles in web surveys.
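As a loose illustration of how item-level response times from paradata might be turned into a simple speeding flag (the file name, column layout and thresholds below are assumptions for the sketch, not the indicators developed in the paper):

    # Illustrative sketch: flagging potential careless responding from
    # item-level response times; data layout and cutoffs are assumptions.
    import pandas as pd

    times = pd.read_csv("item_response_times.csv")   # respondent_id, item, seconds
    per_item = times.pivot(index="respondent_id", columns="item", values="seconds")

    # Share of items answered faster than a minimal reading threshold (assumed 1 s).
    fast_share = (per_item < 1.0).mean(axis=1)

    # Respondents who rush through most items are candidates for closer inspection,
    # e.g. cross-checked against model-based response style indicators.
    flagged = fast_share[fast_share > 0.5]
    print(f"{len(flagged)} respondents flagged as potential speeders")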


Do you agree? Do you strongly agree? Response categories and verification of substantive hypotheses

Professor Artur Pokropek (IFiS PAN) - Presenting Author
Dr Marek Muszyński (IFiS PAN)
Dr Tomasz Żółtak (IFiS PAN)

In quantitative social research, constructs are often measured using rating (e.g., Likert-type) scales. However, there is not yet a consensus on how the number of response categories and their presentation (e.g., labelling) affect the verification of substantive hypotheses. Moreover, the response processes evoked by scales with different numbers of response categories and different labellings have only been studied preliminarily (Revilla et al., 2014).

This work tests three potential mechanisms that could differentiate the results obtained from scales of different lengths and types. First, we check whether scales with different response options induce different levels of engagement and attentiveness in answering items. We study this mechanism using process-data indices based on response times and cursor moves (Horwitz et al., 2017; Pokropek et al., 2022) and other indices (Meade & Craig, 2012), including self-reported attentiveness. Second, we test whether different response categories generate different levels of subjective interest and burden, as self-reported by respondents. Third, we test whether different response categories trigger more response styles (Wetzel et al., 2016). Finally, we examine how these mechanisms affect the convergence of results based on scales that differ in the number of response categories and labelling.
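One widely used index from that literature (Meade & Craig, 2012) is the longstring count, i.e., the longest run of identical consecutive answers per respondent. The sketch below uses assumed file and column names and is a generic implementation, not the authors':

    # Generic longstring index sketch; wide-format data layout is an assumption.
    import pandas as pd

    def longstring(row: pd.Series) -> int:
        """Length of the longest run of identical consecutive responses."""
        values = row.tolist()
        run = best = 1
        for prev, cur in zip(values, values[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    # Assumed data: one row per respondent, one column per scale item.
    items = pd.read_csv("scale_items.csv").set_index("respondent_id")
    items["longstring"] = items.apply(longstring, axis=1)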

We present the results of a self-administered online survey experiment (> 2,000 respondents) in which we randomly assigned respondents to scales with different numbers of response categories (3-, 4-, 5-, 6-, 7-, 10- and 11-point) and with differently labelled categories. Although slight differences in response processes exist, they do not affect the verification of a substantive hypothesis (i.e., scale convergent validity). It seems that the choice of response category type generally has little relevance for the substantive researcher (when examining relationships between variables).


Improving the Quality of Responses in Volunteer Web Survey Panels

Dr Andy Peytchev (RTI) - Presenting Author
Mrs Emily Geisen (Qualtrics)
Dr Emilia Peytcheva (RTI)

Members of volunteer web panels tend to participate in many panels (Gittelman & Trimarchi, 2009; Willems, Vonk, & Ossenbruggen, 2006). Consequently, most panel members receive numerous requests to participate in surveys each day. A concern is that these respondents do not always carefully read and respond to each question, leading to low-quality answers. Researchers may mitigate this by removing respondents who complete surveys too quickly, fail attention checks, or provide inconsistent answers.
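A hedged sketch of that kind of post-hoc screening (the file, column names and cutoffs are invented for illustration and are not the rules used in this study):

    # Illustrative post-hoc screening of a web panel sample; all names and
    # cutoffs are assumptions for the sketch.
    import pandas as pd

    df = pd.read_csv("panel_responses.csv")          # assumed file, one row per respondent

    too_fast = df["duration_seconds"] < 0.3 * df["duration_seconds"].median()
    failed_check = df["attention_check_passed"] == 0
    inconsistent = df["age"] < df["years_in_current_job"]  # simple plausibility rule

    clean = df[~(too_fast | failed_check | inconsistent)]
    print(f"Removed {len(df) - len(clean)} of {len(df)} respondents")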

Another option is to simplify the response task, making it easier for respondents to understand the question and answer thoughtfully. The norm in questionnaire design is to provide response options such as “Yes” and “No” that have no meaning without thoughtful consideration of the question stem. Yet this may contribute to suboptimal responding in web surveys, where respondents are less motivated. If they misinterpret the question stem, for example because they read it too quickly, the response options will not alert them to this. Two possible alternatives to traditional “yes/no” options are (1) to provide more complete response options that carry over meaning from the question stem, or (2) to use the main part of the question in the response options. We hypothesize that this manipulation will be most effective when the question is long or complex, because it introduces additional information in the response options. Additionally, it should be more effective for weak opinions or when an opinion is not yet formed.

In a survey of approximately 4,000 web survey panel members, we randomly assigned respondents to the three response option approaches. We also designed questions for the three hypothesized conditions: questions on polarized topics, issues that are not polarized for most people, and questions that introduce new or surprising information.