
ESRA 2023 Program

All time references are in CEST

Boost that respondent motivation! 3

Session Organisers: Dr Marieke Haan (University of Groningen), Dr Yfke Ongena (University of Groningen)
Time: Friday 21 July, 09:00 - 10:30
Room: U6-11

Conducting surveys is harder than ever before: the overwhelming number of surveys has led to survey fatigue, and people generally feel less obliged to participate. The downward trend in survey response rates is a major threat to conducting high-quality surveys, because it introduces the potential for nonresponse bias and thus distorted conclusions. Moreover, even when respondents decide to participate, they may be reluctant to disclose information, for example because they dislike the topic, find questions too sensitive or too hard, or are annoyed by the length of the survey.

Therefore, surveyors need to come up with innovative strategies to motivate (potential) respondents to participate in surveys. These strategies may be designed for the general population but can also be targeted at specific hard-to-survey groups. For instance, machine learning methods may improve data collection processes (Buskirk & Kircher, 2021), the survey setting can be made more attractive (e.g., by using interactive features or videos), and reluctance to disclose sensitive information may, for instance, be reduced by using face-saving question wording (Daoust et al., 2021).

In this session we invite you to submit abstracts on strategies that may help to boost respondent motivation. Abstracts may focus on motivating respondents to start a survey, but we also welcome abstracts that focus on survey design to prevent respondents from dropping out or giving suboptimal responses. More theoretically oriented abstracts, for example literature reviews, also fit within this session.

Keywords: nonresponse, innovation, motivation

Papers

Combining Form-Based and Chatbot-Based Questions in Surveys: An Experiment with GPT-3

Mr Pierre Petronin (Google) - Presenting Author
Dr Mario Callegaro (Google Cloud)

In this study, we explore the use of a hybrid approach in online surveys, combining traditional form-based closed-ended questions with open-ended questions administered by a chatbot.

We trained a chatbot using OpenAI's GPT-3 language model to produce context-dependent probes in response to answers to open-ended questions. The goal was to mimic a typical professional survey interviewer scenario, where the interviewer is trained to probe the respondent's answer to an open-ended question.

For example, assume this initial exchange:
“What did you find hard to use or frustrating when using Google Maps?”
“It wasn't easy to find the address we were looking for”

The chatbot would follow up with “What made it hard to find the address?” or “What about it made it difficult to find?” or “What steps did you take to find it?”.
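The abstract does not describe the implementation, but a rough sketch of how such a GPT-3-based probe generator could be wired up is shown below, assuming the legacy OpenAI Python completions client (openai<1.0); the prompt wording, model name, and parameters are illustrative assumptions rather than the authors' setup.

# Hedged sketch of a chatbot-style probe generator, assuming the legacy
# OpenAI Python client (openai<1.0) and a GPT-3 completions model.
# Prompt wording, model choice, and parameters are illustrative assumptions,
# not the authors' implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_probe(question: str, answer: str) -> str:
    """Return one context-dependent follow-up probe for an open-ended answer."""
    prompt = (
        "You are a professional survey interviewer. Ask one short, neutral "
        "follow-up question that probes the respondent's answer.\n\n"
        f"Survey question: {question}\n"
        f"Respondent answer: {answer}\n"
        "Follow-up probe:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 model
        prompt=prompt,
        max_tokens=40,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# Example based on the exchange quoted above:
probe = generate_probe(
    "What did you find hard to use or frustrating when using Google Maps?",
    "It wasn't easy to find the address we were looking for",
)
print(probe)  # e.g. "What made it hard to find the address?"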

The experiment consisted of a Qualtrics survey with 1,200 participants, who were randomly assigned to one of two groups. Both groups answered closed-ended questions, but the final open-ended question differed between the groups, with one group receiving a chatbot and the other group receiving a single open-ended question.

The results showed that using a chatbot resulted in higher quality and more detailed responses compared to the single open-ended question approach, and respondents indicated a preference for the chatbot over a single open-ended question. However, respondents also noted the importance of avoiding repetitive probes and expressed dislike for the uncertainty around the number of required exchanges.

This hybrid approach has the potential to provide valuable insights for survey practitioners, although there is room for improvement in the conversation flow.


Does including guilt-free strategies in surveys decrease socially desirable responses? A replication study.

Miss Emma Zaal (University of Groningen) - Presenting Author
Dr Yfke Ongena (University of Groningen)
Professor John Hoeks (University of Groningen)

Socially Desirable Responding (SDR) arises when people want to portray a better version of themselves than how they actually behave. SDR negatively affects survey data because it biases responses towards what respondents perceive as more socially acceptable. When questions touch upon delicate and sensitive topics, respondents are more likely to provide socially desirable reports. For example, in response to questions on norm-compliant behavior (e.g., compliance with COVID-19 restrictions) respondents are more likely to overreport, while norm non-compliant behavior (e.g., substance use) is more likely to be underreported.

Over the years, scholars from different backgrounds have developed a variety of strategies aimed at reducing socially desirable responses in surveys. To date, it is still unclear which strategies for lowering SDR work best to obtain answers that are not, or only minimally, biased towards societal approval. A promising new method to overcome SDR is proposed by Daoust et al. (2021). In experiments in 12 countries, they show that including guilt-free strategies makes participants more likely to admit noncompliance with COVID-19 restrictions. A brief preamble (introduction) is added to the question, stating that other people also engage in non-compliant behavior. In addition, the response options are manipulated: a “face-saving” option is added (e.g., adding an “Only when necessary” option to the Yes/No answer options).

We carried out a replication study and included two additional behavioral topics (sustainable behavior and responsible driving behavior) to examine whether face-saving strategies can also be effective in reducing SDR beyond questions on compliance with COVID-19 restrictions. In addition, compared to Daoust's experimental design, we added a condition testing whether adding answer options without a preamble is also effective in reducing SDR.
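To make the experimental design concrete, a minimal, hypothetical Python sketch of how the conditions could be represented is given below; the preamble text, question wording, and condition labels are illustrative assumptions, not the actual study materials.

# Hypothetical sketch of the face-saving conditions described above.
# Preamble text, question wording, and labels are illustrative assumptions,
# not the actual study materials.
conditions = {
    "control": {
        "preamble": None,
        "options": ["Yes", "No"],
    },
    "face_saving_option_only": {  # the additional condition in this replication
        "preamble": None,
        "options": ["Yes", "No", "Only when necessary"],
    },
    "preamble_plus_face_saving": {  # Daoust et al. (2021)-style condition
        "preamble": ("As we all know, it can be difficult to always comply "
                     "with COVID-19 restrictions."),
        "options": ["Yes", "No", "Only when necessary"],
    },
}

question = "Did you comply with the COVID-19 restrictions at all times?"

for name, spec in conditions.items():
    intro = (spec["preamble"] + " ") if spec["preamble"] else ""
    print(f"[{name}] {intro}{question} {spec['options']}")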


Satisficing in questionnaires: an experimental study on the effects of agree-disagree versus construct-specific and fully versus end labelled answer scales on answer patterns leading to data quality reduction

Mr Jeffry Frikken (University of Groningen) - Presenting Author
Dr Yfke Ongena (University of Groningen)
Dr Marieke Haan (University of Groningen)

Experimental studies comparing data quality between agree-disagree (AD) and construct-specific (CS) answer scales show mixed results: some studies found no differences, others found more valid and more reliable scales for AD items, and still others found this for CS items; CS scales have also been found to be less prone to response effects (Dykema et al., 2021). Research on how fully versus end labelled answer scales affect data quality has likewise produced mixed recommendations as to which of the two types is preferable. To contribute to knowledge on differences in data quality when specific combinations of scale types are used, we conducted a 2×2 experiment examining how AD and CS answer scales, either fully or end labelled, affect data quality.

We analysed data from a probability-based general population survey fielded in the LISS Panel (n=2,411), which is representative of the Dutch population, focusing on different types of satisficing answer strategies, among which acquiescence response style (ARS), extreme response style (ERS), and midpoint response style (MRS). Satisficing is the tendency of survey respondents to adopt response strategies that result in answer patterns reducing data quality (Roberts et al., 2019). AD scales in particular may be more subject to ARS and ERS because of their scale characteristics, whilst CS scale items are often designed as end labelled only, as they often do not allow for true midpoint labels.
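The abstract does not spell out how these response styles are operationalised; as a hedged illustration, the sketch below computes per-respondent ARS, ERS, and MRS shares on a battery of 5-point items in one common way (the study's actual operationalisation may differ).

# Hedged sketch: one common way to operationalise response styles on a
# battery of 5-point items (1 = fully disagree ... 5 = fully agree).
# The study's actual operationalisation may differ.
import pandas as pd

def response_style_shares(items: pd.DataFrame) -> pd.DataFrame:
    """Per-respondent share of acquiescent, extreme, and midpoint answers."""
    n_items = items.shape[1]
    return pd.DataFrame({
        "ARS": items.isin([4, 5]).sum(axis=1) / n_items,  # agreement categories
        "ERS": items.isin([1, 5]).sum(axis=1) / n_items,  # scale endpoints
        "MRS": items.eq(3).sum(axis=1) / n_items,         # midpoint
    })

# Example with three respondents answering four items:
demo = pd.DataFrame({
    "q1": [5, 3, 1],
    "q2": [4, 3, 5],
    "q3": [5, 3, 1],
    "q4": [4, 2, 1],
})
print(response_style_shares(demo))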

Preliminary analyses indicate no difference in ERS between AD and CS scales, more ERS for end labelled scales compared with fully labelled scales, and no larger degree of ARS for AD scales compared with CS scales.


Why content matters: Improving respondents’ survey experience by varying the content of questionnaires

Ms Saskia Bartholomäus (GESIS Leibniz Institute for the Social Sciences) - Presenting Author
Mr Tobias Gummer (GESIS Leibniz Institute for the Social Sciences)

Having a positive experience when answering a survey can have a positive impact on participation rates and data quality. While previous research has focused on how certain question(naire) design elements such as overall length, comprehensibility of questions, and visual layout affect respondents' experience, much less attention has been paid to a questionnaire's content itself. Prior studies have shown that interest in a topic matters for participation decisions and for providing high-quality answers. Yet little is known about how strategically provided content can be used to motivate respondents. Although researchers might be limited in changing the content of a survey, including a module of additional selected questions seems feasible. Consequently, with the present study, we aim to answer the research question of whether it is possible to change respondents' survey experience by intentionally changing a survey's content.

To investigate our research question, we conducted a non-probability web survey among 1,097 respondents of an online access panel in Germany. After completing a module with questions on politics, respondents were randomly assigned to either a second module of questions on politics or a module with questions on their subjective well-being. Following the experimental manipulation, we measured the respondents' survey experience.

Preliminary results indicate that the modules differed in how interesting and diverse they were perceived to be. The effects of the content were moderated by the respondents' self-reported topic interests. For instance, respondents who reported a higher interest in answering political questions seem to have a better survey experience when answering two modules on politics than those with lower topic interest. In summary, we found a first indication that supplementing a questionnaire with content that is of interest to respondents can improve their overall survey experience. As utilizing these individual differences in survey experience could help to systematically improve data quality and increase participation among selected subgroups, we will discuss the implications of our findings for adaptive survey designs.


Defining the informed consent needs of patients in radiological healthcare in a video vignette experiment.

Dr Yfke Ongena (University of Groningen) - Presenting Author
Dr Marieke Haan (University of Groningen)
Dr Janina Wildfeuer (University of Groningen)

Vignettes, i.e., brief situational descriptions that help readers relate to the situation, are commonly used to communicate complex information on medical procedures, and their use in surveys is also increasing. With the aim of mapping the general population's views on the need for and contents of informed consent procedures in radiological healthcare, we implemented an experimental study in a survey comparing written vignettes to animated video vignettes. Instructional animations are particularly accessible and improve the comprehension of health information. We therefore assume that animated video vignettes allow participants to process complex information relevant to informed consent even more conveniently.

In the first part of our questionnaire, fielded within the LISS panel, a subsample of 400 respondents was randomly assigned to conditions showing 10 animated videos or simple texts on diagnostic and interventional radiology. Effects of the presentation format of vignettes are assessed via comparisons of response distributions, response patterns (i.e. consistency in answering behavior), respondents' answers to a question testing their comprehension of the vignette and questions asking them to evaluate the vignettes, as well as response times. Preliminary analyses of the first video, which had a duration of 18 seconds, showed that respondents in the animated video condition indeed spent more time on the page with the video and three questions on that video than respondents in the text condition. The item testing comprehension of the vignette did not show significant differences between the two conditions, but we did see effects of the amount of time spent on the page with the vignette: the longer respondents had spent on the page, the larger the chance of a correct answer. Respondents’ evaluation of complexity of the vignettes was more positive in the video condition than in the text condition, but other evaluations showed no significant differences.