
ESRA 2023 Program

All time references are in CEST

When and how to contact survey respondents

Session Organiser Dr Alessandra Gaia (University of Milano-Bicocca)
Time Friday 21 July, 09:00 - 10:30
Room U6-06

In an era of declining response rates, it is increasingly important for survey practitioners and methodologists to identify best practices for obtaining respondents' participation. First, the choice of invitation mode (e.g. telephone, mail, or email) may affect both respondents' propensity to participate in surveys and their ability to participate within a short time frame (participation speed). Secondly, invitation timing (i.e. day of the week and time of day) and the timing of reminders may also influence respondents' propensity to participate and/or response speed.

In defining contact strategies, survey practitioners and researchers also face a trade-off between survey burden and unit non-response: on the one hand, researchers may want to split a survey into separate follow-up modules to reduce survey length (and hence response burden); on the other hand, administering survey follow-ups may lead to higher unit non-response or attrition, as respondents may not be willing to be interviewed an additional time.

The session discusses the effects of varying survey invitation modes and timings (day of the week and time of day) on unit non-response and non-response bias. It also discusses trade-offs between item non-response (due to the drop-off associated with response burden in longer survey instruments) and unit non-response (due to lack of contact with, or participation in, survey follow-ups).

Research on contact modes and timing is particularly important nowadays, as the Covid-19 pandemic led to changes in time use and schedules that have not fully reverted, and may never revert, to pre-pandemic habits; these changes might have important implications for the effectiveness of contact invitations and, more broadly, for survey participation.

Keywords: survey participation, contact modes, invitation modes

Papers

How to improve measurement of food insecurity? Implications of the recall period

Mr Andrej Kveder (UNHCR) - Presenting Author
Ms Imane Chaara (UNHCR)
Ms Lorenza Rossi (UNHCR)

The proposed research addresses the balance between recall bias over longer recall periods and under-reporting of incidence over shorter ones when measuring rare events. We propose a split-ballot experiment to measure food insecurity over two recall periods, of 7 and 30 days, using the USAID Household Food Insecurity Access Scale (HFIAS) and the WFP Household Hunger Scale (HHS). The experiment is embedded in a large-scale household survey on the living conditions of refugees (the UNHCR Forced Displacement Survey - FDS) in South Sudan.
Food security exists when all people, at all times, have physical and economic access to sufficient, safe, and nutritious food that meets their dietary needs and food preferences for an active and healthy life (FAO, 1996). Forcibly displaced people (FDPs) very commonly have inadequate access to food in terms of quantity, diversity, and nutritional adequacy, and many are considered severely food insecure.
In the proposed experiment we will compare two instruments over two recall periods for two different populations (refugees and the host population living in close proximity to them). We will compare the HFIAS to the HHS, which is a sub-scale of the former focusing on items that measure severe food insecurity. The primary focus of the research will be on comparing the results for the two recall periods. Over the longer recall period, we expect to capture a higher incidence of severe events such as “going the whole day without food”, but recall bias might be larger. Over the shorter recall period, recall bias is expected to be smaller, but the measurement may miss many severe events of food insecurity.
The research will contribute to the discussion on improving the measurement of food insecurity, particularly in severe and protracted situations, with the goal of improving policy and programmatic interventions.
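
As an illustration of the recall trade-off described above, here is a minimal simulation sketch in Python; the daily event probability and recall-decay rate are hypothetical values chosen for illustration, not figures from the study.

import random

random.seed(42)

DAILY_EVENT_PROB = 0.03    # hypothetical daily probability of a severe event
DAILY_RECALL_DECAY = 0.04  # hypothetical per-day chance of forgetting an event

def reports_severe_event(recall_days):
    """True if a simulated respondent reports at least one severe event
    (e.g. going the whole day without food) within the recall window."""
    for days_ago in range(1, recall_days + 1):
        if random.random() < DAILY_EVENT_PROB:
            # Older events are more likely to be forgotten (recall bias).
            p_recalled = max(0.0, 1.0 - DAILY_RECALL_DECAY * days_ago)
            if random.random() < p_recalled:
                return True
    return False

def reported_incidence(recall_days, n=10_000):
    return sum(reports_severe_event(recall_days) for _ in range(n)) / n

# The 30-day window captures more events, but each report is subject to
# more recall decay; the 7-day window misses events that fall outside it.
print(f"7-day recall : {reported_incidence(7):.3f}")
print(f"30-day recall: {reported_incidence(30):.3f}")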


Targeted timing of mail invitation: impact on web survey response rates and response speed

Professor Annamaria Bianchi (University of Bergamo)
Professor Peter Lynn (Institute for Social and Economic Research, University of Essex)
Dr Alessandra Gaia (University of Milano-Bicocca) - Presenting Author

Survey researchers using online data collection methods continue to invest in efforts to identify ways of improving response rates and response speed (i.e. response within a few days of the survey invitation). In particular, survey researchers are increasingly adopting tailored designs, where they no longer focus on the average effect of survey design features but are instead interested in the effect on subgroups of particular interest, namely those with otherwise low response rates or a propensity for slow response. This reflects a recognition that both the outcomes of interest (response rate, response speed) and the effectiveness of design features may vary substantially across sample subgroups.

In this framework, and using experimental data from the Understanding Society Innovation Panel, we focus on the day of the week on which an invitation to participate in a web survey is mailed, and investigate the effect of targeted timing for the initial invitation as well as for subsequent individual reminders. To do so, we implemented an experiment in the mixed-mode sample of Wave 9 of the panel. Half of the mixed-mode sample received the email invitation and subsequent reminders according to standard procedures, while the other half was sent the invitation and reminders on a day chosen according to the day on which they had responded to the questionnaire in previous waves, as recorded in paradata. We study whether the propensity to participate and the speed of participation differ when the invitation is received on a targeted rather than a standard schedule. We also examine the role of prior participation in the panel and potential differences between those who responded on small devices and others. We speculate that people receiving the invitation on a targeted basis might participate immediately, particularly if they can do so using mobile devices.
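
As an illustration of the assignment rule described above, a minimal Python sketch follows; the day-picking logic, the fallback rule, and the standard mailing day are assumptions made for illustration, not the authors' implementation.

from collections import Counter

STANDARD_DAY = "Tuesday"  # assumed standard-procedure invitation day

def invitation_day(previous_response_days, treatment_group):
    """Pick the day of the week on which to invite one panel member.

    previous_response_days: days of the week on which the member completed
        the questionnaire in earlier waves (taken from paradata).
    treatment_group: "standard" or "targeted", from random assignment.
    """
    if treatment_group == "standard" or not previous_response_days:
        # The control half, and targeted cases with no usable paradata,
        # follow the standard schedule.
        return STANDARD_DAY
    # The targeted half is invited on the day they most often responded.
    return Counter(previous_response_days).most_common(1)[0][0]

print(invitation_day(["Sunday", "Sunday", "Monday"], "targeted"))  # Sunday
print(invitation_day([], "targeted"))                              # Tuesday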


Comparing Immediate Versus Later Contact for Follow-On Studies for a Push-to-Web Population Survey

Mr Todd Hughes (University of California Los Angeles) - Presenting Author
Dr Ninez Ponce (University of California Los Angeles)
Mr Royce Park (University of California Los Angeles)
Mr Jiangzhou Fu (University of California Los Angeles)
Ms Kathy Langdale (SSRS)

The California Health Interview Survey (CHIS) is a population-based omnibus public health survey of the diverse population of California that employs a postal-mail push-to-web data collection approach with telephone nonresponse follow-up. Information gathered from the CHIS can serve as a screening mechanism to identify rarer populations of interest for a specialized follow-on survey. These follow-on surveys are ideal for providing robust data on underrepresented, underserved, hard-to-reach, and low-prevalence populations. The follow-on surveys vary in length but usually last 15-20 minutes. Given that the CHIS takes 45 to 60 minutes, and the risk of break-off during long web surveys, we have historically asked for permission to send an invitation to participate in the follow-on study two or more weeks later. However, recent projects have explored the effectiveness of offering an incentive for respondents to continue with a follow-on survey immediately after completing the main CHIS survey, versus at a later contact.
This brief presents the results of three CHIS follow-on surveys conducted with Latino and Asian immigrants; Asian-American, Native Hawaiian, and Pacific Islander populations; and persons in need of long-term services and support. We will present cooperation rates of respondents completing the follow-on surveys immediately after the CHIS versus at a later contact point, and compare cooperation rates by the length of time to re-contact. We will also present any differences in follow-on survey cooperation rates due to differences in the initial CHIS survey mode (phone vs. web).


Did the Pandemic Impact Nonresponse Error in a Cross-National Project? An Analysis of Contact Attempts in Five European Countries

Ms Sofi Sinozich (Pew Research Center) - Presenting Author
Ms Carolyn Lau (Pew Research Center)
Dr Ashley Amaya (Pew Research Center)
Dr Patrick Moynihan (Pew Research Center)

The dramatic shifts in daily routines, technology adoption, and other lifestyle factors during the COVID-19 pandemic caused survey practitioners and methodologists alike to revisit many of the taken-for-granted assumptions informing best practices. Among these is the assumption that increased effort to reach randomly selected but harder-to-reach individuals will reduce the potential for systematic nonresponse bias, yielding a more representative sample as well as more accurate characterizations of public opinion.

Since some initial evidence from the first year of the pandemic suggested that individuals may have become more available to participate in surveys, perhaps due to restrictions on travel and gatherings and the surge in remote opportunities, it is reasonable to ask whether pre-pandemic standards of effort to reach respondents still hold. With rising costs and limited timeframes, a reduction in the effort required to reach a representative sample would be a nontrivial gain for survey researchers.

The Pew Research Center’s Global Attitudes Project has maintained a seven-call design in the European countries it surveys by phone, based on the research literature and successful past experience. This presentation will consider whether our standard approach to the number and targeting of call attempts remains valid, and whether nonresponse bias could now be mitigated more effectively or efficiently with a reduced call design.

Using data from five European countries – France, Germany, Italy, Spain, and the United Kingdom – we will review the changing impact of successive call attempts across annual dual-frame surveys over the course of the pandemic, from 2019 to 2023, including contact, cooperation, and response rates; achieved sample demographics and the weighting required for sample balancing; and estimates of public attitudes. Our analysis will also explore pandemic effects on paradata, such as the timing of successful contact attempts, and on the effectiveness of the dual-frame allocations.
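
As an illustration of this kind of attempt-level analysis, a minimal Python sketch follows; the record layout and the data are hypothetical, not the project's.

def response_rate_by_attempt(completed_on_attempt, max_attempts=7):
    """completed_on_attempt holds one entry per sampled case: the call
    attempt (1-based) on which an interview was completed, or None."""
    n = len(completed_on_attempt)
    cumulative = 0
    for attempt in range(1, max_attempts + 1):
        gained = sum(1 for a in completed_on_attempt if a == attempt)
        cumulative += gained
        print(f"attempt {attempt}: +{gained / n:.1%} "
              f"(cumulative response rate {cumulative / n:.1%})")

# Hypothetical cases: most interviews come on early attempts; if later
# attempts add little, that pattern would support a reduced call design.
response_rate_by_attempt([1, 1, 2, 1, 3, None, 2, 5, None, 1, 4, None])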