All time references are in CEST
Boost that respondent motivation! 2
Session Organisers | Dr Marieke Haan (University of Groningen), Dr Yfke Ongena (University of Groningen) |
Time | Thursday 20 July, 16:00 - 17:30 |
Room | U6-08 |
Conducting surveys is harder than ever before: the overwhelming number of surveys has led to survey fatigue, and people generally feel less obliged to participate in surveys. The downward trend in survey response rates is a major threat to conducting high-quality surveys, because it introduces the potential for nonresponse bias, leading to distorted conclusions. Moreover, even when respondents decide to participate, they may be reluctant to disclose information because they dislike the topic, find questions too sensitive or too difficult, or are annoyed by the length of the survey.
Therefore, surveyors need to come up with innovative strategies to motivate (potential) respondents to participate. These strategies may be designed for the general population but can also be targeted at specific hard-to-survey groups. For instance, machine learning methods may improve data collection processes (Buskirk & Kircher, 2021), the survey setting can be made more attractive (e.g., by using interactive features or videos), and reluctance to disclose sensitive information may be reduced by using face-saving question wording (Daoust et al., 2021).
In this session we invite you to submit abstracts on strategies that may help to boost respondent motivation. On the one hand, abstracts can focus on motivating respondents to start a survey; on the other hand, we also welcome abstracts that focus on survey design to prevent respondents from dropping out or giving suboptimal responses. More theoretically oriented abstracts, such as literature reviews, also fit within this session.
Keywords: nonresponse, innovation, motivation
Dr Yfke Ongena (University of Groningen) - Presenting Author
Dr Marieke Haan (University of Groningen)
Dr Janina Wildfeuer (University of Groningen)
Vignettes, i.e., brief situational descriptions that help readers relate to a situation, are commonly used to communicate complex information on medical procedures, and their use is also increasing in surveys. With the aim of mapping the general population's views on the need for and contents of informed consent procedures in radiological healthcare, we implemented an experimental study in a survey comparing written vignettes to animated video vignettes. Instructional animations are particularly accessible and improve the comprehension of health information. We therefore assume that animated video vignettes allow participants to process complex information relevant to informed consent even more easily.
In the first part of our questionnaire, fielded within the LISS panel, a subsample of 400 respondents was randomly assigned to conditions showing 10 animated videos or simple texts on diagnostic and interventional radiology. Effects of the presentation format of the vignettes are assessed via comparisons of response distributions, response patterns (i.e., consistency in answering behavior), respondents' answers to a question testing their comprehension of the vignette, questions asking them to evaluate the vignettes, and response times. Preliminary analyses of the first video, which had a duration of 18 seconds, showed that respondents in the animated video condition indeed spent more time on the page with the video and its three accompanying questions than respondents in the text condition. The item testing comprehension of the vignette did not show significant differences between the two conditions, but we did see effects of the amount of time spent on the page with the vignette: the longer respondents had spent on the page, the larger the chance of a correct answer. Respondents' evaluation of the complexity of the vignettes was more positive in the video condition than in the text condition, but other evaluations showed no significant differences.
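As an illustrative aside, the reported relation between time on page and comprehension is the kind of effect a simple logistic regression would capture. The sketch below is a minimal, hypothetical reconstruction; the file name, column names, and model are assumptions, not the authors' analysis code:

```python
# Minimal sketch (hypothetical data): logistic regression of vignette
# comprehension on time spent on the vignette page, adjusting for condition.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: correct (1 = comprehension item answered correctly),
# page_time (seconds on the vignette page), condition ("video" or "text").
df = pd.read_csv("liss_vignette_experiment.csv")  # hypothetical file

model = smf.logit("correct ~ page_time + C(condition)", data=df).fit()
print(model.summary())  # a positive page_time coefficient matches the finding
```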
Mr Jeffry Frikken (University of Groningen) - Presenting Author
Dr Yfke Ongena (University of Groningen)
Dr Marieke Haan (University of Groningen)
Experimental studies comparing data quality between agree-disagree (AD) and construct-specific (CS) answer scales show mixed results: some studies found no differences, some found more valid scales and higher reliability for AD scales, and still others found the same for CS scales; the latter have also been found to be less prone to response effects (Dykema et al., 2021). Research on how data quality depends on fully versus end labelled answer scales has likewise produced mixed suggestions as to which of the two types is preferable. To contribute to knowledge on differences in data quality when specific combinations of scale types are used, we conducted a 2×2 experiment and examined how AD and CS answer scales that are either fully or end labelled affect data quality.
We analysed data from a probability-based general population survey fielded in the LISS Panel (n = 2,411), which is representative of the Dutch population, looking at different types of satisficing answer strategies, among which acquiescence response style (ARS), extreme response style (ERS), and midpoint response style (MRS). Satisficing is the tendency of survey respondents to adopt response strategies resulting in answer patterns that reduce data quality (Roberts et al., 2019). AD scales in particular may be more subject to ARS and ERS because of their scale characteristics, whilst CS scale items are often designed as end labelled only, as they often do not allow for true midpoint labels.
Preliminary analyses indicate no difference in ERS between AD and CS scales, more ERS for end labelled scales than for fully labelled scales, and no larger degree of ARS for AD scales than for CS scales.
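For readers unfamiliar with these indices, the sketch below shows one common way to operationalise ERS, MRS, and ARS as the per-respondent share of extreme, midpoint, and agreeing answers on 5-point items. The data are simulated and the exact operationalisation is an assumption, not the authors' code:

```python
# Minimal sketch (simulated data): per-respondent response-style indices
# over a block of eight 5-point agree-disagree items.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(100, 8)),
                     columns=[f"item{i}" for i in range(1, 9)])

ers = items.isin([1, 5]).mean(axis=1)  # extreme response style: share of 1s and 5s
mrs = items.eq(3).mean(axis=1)         # midpoint response style: share of 3s
ars = items.isin([4, 5]).mean(axis=1)  # acquiescence: share of agreeing answers

print(pd.DataFrame({"ERS": ers, "MRS": mrs, "ARS": ars}).describe())
```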
Dr Alessandra Gaia (University of Milano-Bicocca) - Presenting Author
Professor Emanuela Sala (University of Milano-Bicocca)
Dr Chiara Respi (University of Milano-Bicocca)
Professor Guido Legnante (University of Pavia)
Motivational statements are often used in web surveys to simulate interviewer presence, with the aim of minimising item non-response and non-response bias. We analyse the effect of motivational statements and assess whether they have any detrimental effect on drop-off rates, non-response in subsequent items, and consent to linkage with Twitter data. To address our aims, we use randomised experimental data from a survey on attitudes towards passive data extraction implemented in an opt-in panel of the Italian population (N=2,249). A random subsample of respondents who did not provide a valid answer to a sensitive question on which political party they voted for in the most recent election received a motivational statement: a privacy reassurance (if respondents selected "prefer not to say") or an invitation to try to recall the relevant information (if they selected "don't know"). The control group did not receive any motivational statement. We assess whether: i) motivational statements increase the number of valid responses; ii) the response distribution differs with and without the motivational statements, and how it compares with the "true" voting turnout and election outcome; iii) motivational statements have any impact on drop-offs, non-response in subsequent items, and consent to data linkage; and iv) the effect of motivational statements is moderated by socio-demographic characteristics and attitudes towards sharing data online. The study contributes to the literature on the topic in a novel way: first, it includes covariates on respondents' attitudes towards privacy; second, it compares the voting distribution with "true" electoral outcomes; third, it assesses the effectiveness of motivational statements in opt-in panels, where these prompts are not as widely adopted as in large-scale probability-based studies. The implications of the empirical results for survey practice are discussed.
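The branching logic of the experiment can be pictured with a short sketch; the statement texts, 50/50 allocation, and function below are illustrative assumptions, not the authors' instrument:

```python
# Minimal sketch (hypothetical texts and allocation) of the randomised
# follow-up logic: only invalid answers trigger a motivational statement,
# and a random subsample serves as the control group.
import random

PRIVACY_REASSURANCE = ("Your answers are confidential and are only reported "
                       "in aggregate form.")
RECALL_INVITATION = ("Please take a moment to think back to the most recent "
                     "election and try to recall which party you voted for.")

def follow_up(answer: str) -> str | None:
    """Return the motivational statement to display, or None."""
    if answer not in ("prefer not to say", "don't know"):
        return None                     # valid answer: no follow-up needed
    if random.random() < 0.5:           # assumed 50/50 randomisation
        return None                     # control group: no statement
    if answer == "prefer not to say":
        return PRIVACY_REASSURANCE      # privacy reassurance
    return RECALL_INVITATION            # invitation to recall
```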
Ms Anouk Zabal (GESIS – Leibniz Institute for the Social Sciences) - Presenting Author
Ms Silke Martin (GESIS – Leibniz Institute for the Social Sciences)
Dr Britta Gauly (GESIS – Leibniz Institute for the Social Sciences)
Dr Sanja Kapidzic (GESIS – Leibniz Institute for the Social Sciences)
Ms Natascha Massing (GESIS – Leibniz Institute for the Social Sciences)
After more than two years of the COVID-19 pandemic, data collection for the second cycle of PIAAC, the Programme for the International Assessment of Adult Competencies, took place from September 2022 to April 2023. PIAAC is an international survey that measures key skills in a face-to-face interview based on a random sample of adults aged 16 to 65. Because face-to-face fieldwork in Germany was extremely reduced in the preceding pandemic years, new challenges were to be expected. This contribution presents the German PIAAC approach to motivating survey participation and discusses strategies and experiences during fieldwork.
On the respondent side, one focus of fieldwork preparation was developing appropriate outreach materials and fieldwork measures to boost respondent cooperation, the idea being that, with a varied bouquet of measures, target persons from all walks of life would find something that appealed to them.
On the interviewer side, a five-day in-person interviewer training was carried out. Beyond providing comprehensive training on the survey protocols, one of the objectives was to motivate the interviewers and make them enthusiastic ambassadors for the study and, as such, excellent recruiters. Given that the PIAAC interview lasts over two hours on average, another important aspect was to equip interviewers with strategies to motivate respondents not only to participate in such a long interview, but also to maintain their engagement during the interview.
The COVID-19 pandemic did leave its trace on the face-to-face survey field, and the data collection was challenging. Various strategies were explored during fieldwork to tailor and intensify measures to reach and gain cooperation from target persons, although reaching under-represented target groups remained a challenge.
Mr Pierre Petronin (Google) - Presenting Author
Dr Mario Callegaro (Google Cloud)
In this study, we explore the use of a hybrid approach in online surveys, combining traditional form-based closed-ended questions with open-ended questions administered by a chatbot.
We trained a chatbot using OpenAI's GPT-3 language model to produce context-dependent probes following responses to open-ended questions. The goal was to mimic a typical professional survey interviewer scenario, where the interviewer is trained to probe the respondent's answer to an open-ended question.
For example, assume this initial exchange:
“What did you find hard to use or frustrating when using Google Maps?”
“It wasn't easy to find the address we were looking for”
The chatbot would follow up with "What made it hard to find the address?", "What about it made it difficult to find?", or "What steps did you take to find it?".
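A minimal sketch of this probing step is given below, using the legacy (pre-1.0) openai Python client; the model name, prompt wording, and decoding settings are assumptions, since the abstract does not describe the authors' exact setup:

```python
# Minimal sketch (assumed model and prompt): generate one context-dependent
# probe to a respondent's open-ended answer, mimicking an interviewer.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_probe(question: str, answer: str) -> str:
    prompt = (
        "You are a professional survey interviewer. Write one short, neutral "
        "follow-up question probing the respondent's answer.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Probe:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 completion model
        prompt=prompt,
        max_tokens=40,
        temperature=0.7,
    )
    return resp.choices[0].text.strip()

print(generate_probe(
    "What did you find hard to use or frustrating when using Google Maps?",
    "It wasn't easy to find the address we were looking for",
))
```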
The experiment consisted of a Qualtrics survey with 1,200 participants, who were randomly assigned to one of two groups. Both groups answered closed-ended questions, but the final open-ended question differed between the groups, with one group receiving a chatbot and the other group receiving a single open-ended question.
The results showed that the chatbot elicited higher-quality and more detailed responses than the single open-ended question, and respondents indicated a preference for the chatbot over a standard open-ended question. However, respondents also noted the importance of avoiding repetitive probes and disliked the uncertainty about the number of required exchanges.
This hybrid approach has the potential to provide valuable insights for survey practitioners, although there is room for improvement in the conversation flow.