
ESRA 2023 Programme at a Glance


All time references are in CEST

Improving the representativeness, response rates and data quality of longitudinal surveys 4

Session Organisers: Dr Jason Fields (US Census Bureau)
Dr Nicole Watson (University of Melbourne)
Time: Thursday 20 July, 14:00 - 15:30
Room: U6-11

Longitudinal survey managers are increasingly finding it difficult to achieve their statistical objectives within available resources, especially with the changes to the survey landscape brought by the COVID-19 pandemic. Tasked with interviewing in different populations, measuring diverse substantive issues, and using mixed or multiple modes, survey managers look for ways to improve survey outcomes. We encourage submission of papers on the following topics related to improving the representativeness, response rates and data quality of all, but especially longitudinal, surveys:

1. Adaptive survey designs that leverage the strength of administrative records, big data, census data, or paradata. For instance, what cost-quality tradeoff paradigm can be operationalized to guide development of cost and quality metrics and their use around the survey life cycle? Under what conditions can administrative records or big data be adaptively used to supplement survey data collection and improve data quality?

2. Adaptive survey designs that address the triple drivers of cost, respondent burden, and data quality. For instance, what indicators of data quality can be integrated to monitor the course of the data collection process? What stopping rules of data collection can be used across a multi-mode survey life cycle?

3. Papers involving longitudinal survey designs focused on improving the quality of measures of change over time. How can survey managers best engage with the complexities of such designs? How are overrepresented or low-priority cases handled in a longitudinal context?

4. Survey designs involving targeted procedures for sub-groups of the sample aimed at improving representativeness, such as sending targeted letters, prioritising contact of hard-to-get cases, timing calls to the most productive windows for certain types of cases, or assigning the hardest cases to the most experienced interviewers.

5. Papers involving experimental designs or simulations aimed at improving the representativeness, response rates and data quality of longitudinal surveys.

Papers

Day of the Week and Time of Survey Dispatch: Effects on Participation Rates

Ms Maria Andreasson (The SOM Institutet, University of Gothenburg)
Ms Sophie Cassel (The SOM Institutet, University of Gothenburg) - Presenting Author
Ms Alexandra Garcia Nilsson (The SOM Institutet, University of Gothenburg)

Many factors can potentially affect participation rates in web surveys, and some are easier for researchers to control than others. One factor that is relatively easy to control is the day and time of survey dispatch, and previous studies have found short-term effects on participation rates depending on dispatch day or time. The present study assesses 28 possible combinations of day (Monday-Sunday) and time (8 a.m., 12 p.m., 4 p.m. and 8 p.m.) of survey dispatch through an experiment administered in the Swedish Citizen Panel in the spring of 2020, followed by a direct replication with the same sample provider in the fall of 2022. In both data collections, respondents were randomly assigned the day and time at which they were emailed an invitation and a reminder to complete a questionnaire. Preliminary results from 2020 show short-term differences in participation rate depending on dispatch day and time, but no significant long-term effects. The results will also shed light on differences in the preferred day and time of survey dispatch across demographic groups as well as across sampling strategies (i.e., probability and non-probability samples). Together, these results will help researchers tailor dispatch day and time to their type of sample in order to maximize participation rates.
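As a rough illustration of the 7 x 4 factorial randomization described in this abstract, the following minimal Python sketch assigns panelists to one of the 28 dispatch cells. All identifiers (the function name, seed, and cell labels) are hypothetical and not taken from the study.

```python
import random
from itertools import product

# The 7 dispatch days crossed with the 4 dispatch times give 28 cells.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
TIMES = ["08:00", "12:00", "16:00", "20:00"]
CELLS = list(product(DAYS, TIMES))  # 28 day/time combinations

def assign_dispatch_cells(panelist_ids, seed=2020):
    """Randomly assign each panelist one of the 28 dispatch cells;
    the assigned cell is used for both the invitation and the reminder."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {pid: rng.choice(CELLS) for pid in panelist_ids}

print(assign_dispatch_cells(["p001", "p002", "p003"]))
```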


Timing of Implementation of a Stopping Rule and Its Effects on Survey Errors and Costs

Mr Xinyu Zhang (University of Michigan) - Presenting Author
Dr James Wagner (University of Michigan)

Stopping effort on cases is an example of an intervention intended to balance survey costs and errors. To implement a rule that stops a subset of cases during data collection, a survey manager needs to choose not only which cases to stop but also when to stop them. Implementing the stopping rule early maximizes cost savings, while waiting until more data have been collected allows decisions to be made with reduced uncertainty. We introduce a risk-conscious stopping rule that considers the upper confidence bound of the cost-error tradeoff in stopping cases. We apply the rule to the Health and Retirement Study and examine how the timing of its implementation affects survey errors and costs at the end of data collection, using several call-attempt counts as alternative timings. Our preliminary results suggest that the timing of implementation does not affect the quality of the estimated proportion of persons reporting good or better health. However, implementing the stopping rule earlier could lead to a larger cost reduction but a lower response rate.
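The abstract does not give the rule's exact functional form, but the general idea of a stopping decision based on an upper confidence bound can be sketched as follows. This is an illustrative guess, not the authors' method: it assumes the tradeoff is summarized as a cost per unit of expected error reduction, with all inputs and names hypothetical.

```python
import numpy as np

def stop_decisions(costs_by_case, error_gain_by_case, budget_ratio, z=1.96):
    """For each active case, form the upper confidence bound (UCB) of the
    estimated per-attempt cost, then flag the case to be stopped once the
    pessimistic cost per unit of expected error reduction exceeds the
    acceptable budget ratio. Being 'risk-conscious' here means using the
    UCB rather than the point estimate of cost."""
    decisions = []
    for costs, gain in zip(costs_by_case, error_gain_by_case):
        mean = costs.mean()
        se = costs.std(ddof=1) / np.sqrt(len(costs))
        ucb_cost = mean + z * se          # pessimistic view of future cost
        decisions.append(ucb_cost / gain > budget_ratio)
    return decisions

# Toy illustration: three cases with simulated call-attempt costs and
# hypothetical error-reduction scores (larger = more valuable to pursue).
rng = np.random.default_rng(42)
costs = [rng.gamma(2.0, 5.0, size=n) for n in (4, 8, 12)]
gains = [0.8, 0.3, 0.1]
print(stop_decisions(costs, gains, budget_ratio=40.0))
```

Running the rule after fewer call attempts (smaller samples of per-attempt costs) widens the confidence bound, which is precisely the cost-savings-versus-uncertainty tension the abstract describes.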


Only call me when it’s raining out – Weather conditions’ influence on survey unit (non-)response

Mr Jan Mackeben (Institut für Arbeitsmarkt- und Berufsforschung) - Presenting Author
Mr Sebastian Bähr (Institut für Arbeitsmarkt- und Berufsforschung)
Mr Jonas Beste (Institut für Arbeitsmarkt- und Berufsforschung)
Mrs Corinna Frodermann (Institut für Arbeitsmarkt- und Berufsforschung)

Methodological research has identified demographics, time of day and day of week, paradata, prior-wave survey responses, employment characteristics, personality, health status, and many other factors as predictors of nonresponse in interviewer-administered survey modes. However, other, as yet unstudied, variables may also influence the likelihood of nonresponse. One understudied yet essential example could be the weather.

Most studies analyze weather as a daily or even longer-term influence. We argue that the immediate weather conditions at the time of the contact attempt should substantially influence (non)response. Past research has shown that weather affects daily activities and the time spent at home. More precisely, fair weather (e.g., higher temperatures, no rain) provides opportunities for many outdoor activities (e.g., swimming, concerts, camping, and outdoor sports). Weather conditions thus affect the opportunity costs of participating in the survey: contact attempts could be more successful in bad weather than in good weather.

In addition, the weather could influence the decision to participate in the survey by affecting mental and physical health and life satisfaction. For instance, individuals could be more likely to reject a survey request in bad weather than in moderate weather conditions.

The present study addresses this research gap by analyzing the effect of different weather conditions on the probability of contact and of response (given successful contact). We prepared a unique longitudinal data set, combining detailed contact data from a large-scale interviewer-administered German panel study with spatially and temporally fine-grained open-source weather information. With these data, we can study weather effects at the respondent’s precise location and at the time of the contact attempt. By factoring weather into survey fieldwork, our results can help increase the efficiency of interviewer-administered data collection.
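The core of such a linkage is joining each contact attempt to the nearest-in-time weather observation at the relevant location. A minimal pandas sketch of that step, with entirely hypothetical column names and toy data standing in for the study's contact and weather records:

```python
import pandas as pd

# Contact attempts from the panel study (hypothetical columns).
contacts = pd.DataFrame({
    "attempt_time": pd.to_datetime(["2022-06-01 10:05", "2022-06-01 18:40"]),
    "station_id": ["S1", "S2"],          # nearest weather station to the address
    "contact_success": [1, 0],
}).sort_values("attempt_time")

# Open-source weather observations (hypothetical columns).
weather = pd.DataFrame({
    "obs_time": pd.to_datetime(["2022-06-01 10:00", "2022-06-01 18:30"]),
    "station_id": ["S1", "S2"],
    "temp_c": [21.5, 17.0],
    "rain_mm": [0.0, 3.2],
}).sort_values("obs_time")

# Match each attempt to the nearest-in-time observation at the same station,
# discarding observations more than 30 minutes away from the attempt.
linked = pd.merge_asof(
    contacts, weather,
    left_on="attempt_time", right_on="obs_time",
    by="station_id",
    direction="nearest",
    tolerance=pd.Timedelta("30min"),
)
print(linked)
```

Contact and response models can then be fit on the linked conditions (here `temp_c` and `rain_mm`) at the moment of each attempt.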


ELIPA 2

Miss Claire Darrigade (Kantar Public) - Presenting Author
Mr Yves Fradier (Kantar Public)

The second edition of the ELIPA survey was conducted between 2019 and 2022 in three interview waves. The purpose of the survey is to understand the integration process in France during the three years following the issuance of a first long-term residence permit (a permit of at least one year, excluding "student" permits) and to evaluate the reception system set up by the Ministry of the Interior (the Republican Integration Contract, CIR). The survey is conducted face-to-face in metropolitan France in the 10 departments with the largest newcomer populations and in 10 languages (French, English, Arabic, Bengali, Chinese, Spanish, Russian, Soninke, Tamil, Turkish).
The population targeted by this survey is highly mobile: 25% of newcomers move during the year following the issuance of their residence permit (i.e., more than twice the annual mobility of people living in France). In a longitudinal survey, keeping respondents' contact information up to date is essential because it strongly affects the participation rate. This is why significant resources were devoted to high-quality inter-wave follow-up, particularly between waves 2 and 3 of ELIPA 2, which were conducted two years apart.
The objective of this inter-wave follow-up was not only to update the contact information of the 5021 wave 2 respondents so that they could be reached more easily during wave 3, but also to maintain contact with respondents in order to ensure the continuity of the study. The 5021 wave 2 respondents were thus contacted three times by mail or telephone (plus an e-mail experiment), which made it possible to establish contact at least once with 96% of them. This was reflected in the participation rate for wave 3, since 80.7% of wave 2 respondents took part in wave 3.