All time references are in CEST
Improving the representativeness, response rates and data quality of longitudinal surveys

Session Organisers: Dr Jason Fields (US Census Bureau), Dr Nicole Watson (University of Melbourne)
Time: Wednesday 19 July, 11:00 - 12:30
Room: U6-01b
Longitudinal survey managers are increasingly finding it difficult to achieve their statistical objectives within available resources, especially with the changes to the survey landscape brought by the COVID-19 pandemic. Tasked with interviewing in different populations, measuring diverse substantive issues, and using mixed or multiple modes, survey managers look for ways to improve survey outcomes. We encourage submission of papers on the following topics related to improving the representativeness, response rates and data quality of all, but especially longitudinal, surveys:
1. Adaptive survey designs that leverage the strength of administrative records, big data, census data, or paradata. For instance, what cost-quality tradeoff paradigm can be operationalized to guide development of cost and quality metrics and their use across the survey life cycle? Under what conditions can administrative records or big data be adaptively used to supplement survey data collection and improve data quality?
2. Adaptive survey designs that address the triple drivers of cost, respondent burden, and data quality. For instance, what indicators of data quality can be integrated to monitor the course of the data collection process? What stopping rules of data collection can be used across a multi-mode survey life cycle?
3. Papers involving longitudinal survey designs focused on improving the quality of measures of change over time. How can survey managers best engage with the complexities of such designs? How are overrepresented or low priority cases handled in a longitudinal context?
4. Survey designs involving targeted procedures for sub-groups of the sample aimed at improving representativeness, such as sending targeted letters, prioritising contact of hard-to-get cases, timing calls to the most productive windows for certain types of cases, or assigning the hardest cases to the most experienced interviewers.
5. Papers involving experimental designs or simulations aimed at improving the representativeness, response rates and data quality of longitudinal surveys.
Ms Line Knudsen (National Centre for Social Research)
Mr Martin Wood (National Centre for Social Research) - Presenting Author
Ms Samantha Spencer (National Centre for Social Research)
Mr Chujan Sivathasan (National Centre for Social Research)
Improving understanding of the experiences of young people with special educational needs and disabilities (SEND) as they move through the education system is key for policy makers and researchers looking to address inequalities in educational experiences and outcomes. A large-scale feasibility study run by the National Centre for Social Research and funded by the Department for Education experimentally explored approaches to maximising response from under-represented groups within the SEND population in England - specifically, children who are 'looked after' or otherwise deemed vulnerable, those from lower income households, and those from ethnic minority backgrounds.
In summer 2022 the first wave of fieldwork was undertaken with pupils aged 12-13 and their parent or carer. Fieldwork was conducted across two separate strands: Strand 1, which adopted a full face-to-face approach, and Strand 2, which was web-only.
Several experiments were devised to assess the effectiveness of the fieldwork approach. For Strand 1 (face-to-face), experiments included testing the effect of providing a day of specialist SEND-related training to interviewers; testing a lower-value (£5) unconditional incentive versus a higher-value (£10) conditional incentive; and using tailored versus 'standard' advance materials. For Strand 2 (web), experiments included testing higher-value versus lower-value conditional incentives (£10 / £5 per participant); using tailored versus 'standard' advance materials; a shorter versus longer interview (20 minutes versus 30 minutes); and the provision of a pre-notification letter versus no pre-notification letter.
With an emphasis on response, the presentation will outline and discuss results of the analysis of the experiments, and reflect on what the findings mean for a potential future longitudinal study of young people with SEND in England, as well as any wider implications for surveys looking to engage with similar audiences, including those who are often deemed harder-to-reach.
Mr Paul Burton (University of Michigan - Survey Research Center) - Presenting Author
Mr Andrew Hupp (University of Michigan - Survey Research Center)
Ms Eva Leissou (University of Michigan - Survey Research Center)
Dr Brady West (University of Michigan - Survey Research Center)
Every six years, the Health and Retirement Study adds a new age cohort to its existing longitudinal panel. In 2022, a portion of the sampled households were invited to complete the screening questionnaire via the web. Cases were randomly assigned to one of two protocols: 1) push-to-web with in-person follow-up for non-responding cases, and 2) in-person first. Post-COVID, returning to in-person work has presented staffing challenges, both in initial interviewer recruitment and in retention, with higher attrition than in prior screening waves. The lower-than-expected number of interviewing staff has led us to design a set of follow-up protocols to make efficient use of the interviewer coverage that we do have. The adaptive follow-up protocols are based on one of six triggers: 1) safety concerns, 2) limited access, 3) work permit denied, 4) initial resistance, 5) 10+ in-person contact attempts, or 6) 12 weeks in the field with fewer than 10 contact attempts. When a trigger is met, 20% of cases are randomly assigned to one of two protocols. The protocol for the initially assigned push-to-web cases included 1) phone follow-up for two weeks, 2) six additional in-person attempts, and 3) finalizing the case with the appropriate outcome. The protocol for the initially assigned in-person cases included 1) phone follow-up for two weeks, 2) a web protocol for four weeks, and 3) six additional in-person attempts. In this presentation, we will report on the effectiveness of each of these protocols, for each of the six types of non-responding households, in producing completed screeners and controlling costs. Preliminary results suggest that the control arm of the experiment is the most effective in terms of raw yield.
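The trigger-then-route logic described in the abstract can be sketched as follows. This is a purely illustrative toy, not HRS production code: the trigger names, the 20% rate, and the protocol steps are taken from the abstract, while every identifier and data structure is a hypothetical stand-in.

```python
import random

# Illustrative sketch of trigger-based adaptive follow-up routing.
# All names are hypothetical; only the trigger list, the 20% sampling
# rate, and the protocol steps come from the abstract.
TRIGGERS = {
    "safety_concerns", "limited_access", "work_permit_denied",
    "initial_resistance", "contact_attempts_10_plus",
    "weeks_12_under_10_contacts",
}

PUSH_TO_WEB_FOLLOWUP = ["phone_2_weeks", "6_in_person_attempts", "finalize"]
IN_PERSON_FOLLOWUP = ["phone_2_weeks", "web_4_weeks", "6_in_person_attempts"]

def assign_followup(initial_mode, trigger, rng=random):
    """Route 20% of triggered cases to the adaptive protocol matching
    their initial mode assignment; the remainder stay in the standard
    workflow and serve as the control arm."""
    if trigger not in TRIGGERS:
        raise ValueError(f"unknown trigger: {trigger!r}")
    if rng.random() >= 0.2:
        return "control"
    return (PUSH_TO_WEB_FOLLOWUP if initial_mode == "push_to_web"
            else IN_PERSON_FOLLOWUP)
```

Separating the trigger test from the random routing keeps the 80% control arm intact for the yield comparison the presentation reports on.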
Mr Marc Plate (Statistics Austria) - Presenting Author
For the first time in Austria, a long-running socio-economic household panel will be implemented. The Austrian Socio-Economic Panel (ASEP) will produce linkable datasets from different sources: administrative data and survey data.
This paper will present selected survey design aspects of the planned ASEP survey pilot, which will be fielded in Q4 2023:
(1) A mixed-mode experiment comparing "Push-to-Web" with a "Tailored Mode" (WEB or CAWI) as predicted by administrative-data models
(2) Use of pre-, post- and tailored in-between-wave incentives
(3) Use of mobile-first questionnaire design
(4) Use of real-time data quality monitoring, combining administrative data, paradata and survey data
Practical examples of how these key survey design aspects are planned to be implemented during the field phase will be demonstrated, and their possible impact on data collection cost, survey quality and respondent burden will be discussed.
Dr Piotr Cichocki (Faculty of Sociology; Adam Mickiewicz University, Poznan) - Presenting Author
Dr Marta Kołczyńska (Institute of Political Studies of the Polish Academy of Sciences)
Dr Piotr Jabkowski (Faculty of Sociology; Adam Mickiewicz University, Poznan)
Most face-to-face cross-national surveys use sampling designs involving the selection of households, either from a list of addresses or via area sampling with field enumeration, followed by an interviewer-administered within-household selection of target respondents. Two sampling methods for selecting individuals within households have frequently been applied in cross-national surveys: the Kish grid and birthday procedures. Kish grid selection is expected to result in higher quality, yet its requirement to compile a complete household inventory of all eligible units may increase refusals. For birthday methods, the interviewer only asks which household member fulfils the birthday rule, which is expected to result in fewer refusals but at the cost of degraded sample quality due to greater opportunities for interviewers or respondents to interfere with the selection. In addition, the Rizzo method (in combination with the Kish grid or birthday procedure) was recently implemented in the ESS. This procedure avoids collecting information about household members for the majority of households.
This presentation aims to analyze the impact of within-household selection on two outcomes related to sample quality, i.e., refusal rates and selection bias. We analyzed almost 250 national surveys from all rounds of the ESS. Based on survey documentation, we split refusals into refusals during the respondent selection process and refusals by the target respondent after successful selection. Our findings confirmed that listing all eligible units is intrusive for household members, as we found that procedures avoiding this step result in fewer refusals before the target respondents were sampled. However, there was no difference between the selection procedures in refusals by respondents after successful selection. We also confirmed that the Kish grid and the Rizzo method with the Kish component provide higher-quality samples despite boosting refusals before or during the selection.
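The operational contrast between the two selection rules can be sketched in a few lines. This is a simplified illustration under stated assumptions: the classic Kish procedure uses pre-printed selection tables keyed to household composition, for which a uniform random draw over the full roster serves here as an equal-probability stand-in, and all data structures are hypothetical.

```python
import random

def kish_style_select(roster, rng=random):
    """Stand-in for Kish-grid selection: requires first listing every
    eligible household member, then choosing one with equal
    probability (the real method uses pre-assigned selection tables)."""
    return rng.choice(sorted(roster))

def next_birthday_select(birthdays, today_month_day):
    """Birthday rule: only one question is needed ("whose birthday
    comes next?"), so no full household inventory is compiled.
    `birthdays` maps member -> (month, day)."""
    def key(member):
        month, day = birthdays[member]
        # Birthdays already past this year wrap around to next year.
        if (month, day) >= today_month_day:
            return (month, day)
        return (month + 12, day)
    return min(birthdays, key=key)
```

The sketch makes the quality-versus-intrusiveness tradeoff concrete: `kish_style_select` cannot run without the complete roster that respondents may find intrusive, while `next_birthday_select` needs only the answer to a single question, which is also what makes it easier to manipulate.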
Dr Stephanie Coffey (U.S. Census Bureau) - Presenting Author
Adaptive and responsive survey designs (ARDs) provide a framework for balancing survey errors and costs, allowing survey managers to effectively utilize limited resources to improve data quality. Typically, these types of designs are applied to surveys with standalone data collection periods, as opposed to longitudinal or pseudo-longitudinal (i.e., repeated cross-sectional) surveys. This means that the interventions applied only impact the outcomes of the data collection period in which they are applied. ARDs become more complicated when interventions in one period can impact outcomes for data collection periods not yet observed, and can even influence interventions made in future data collection periods.
The Census Bureau is researching how to implement an ARD in the American Community Survey (ACS) to control data collection costs accrued by its interviewer follow-up operation. While the ACS is not a longitudinal survey, it is a repeated cross-sectional survey that combines survey data collected from 12 (and 60) panels over 12 (and 60) months to generate annual (and 5-year) estimates. As a result, stopping work on cases in a suboptimal way can have an impact on published estimates and their margins of error for years.
As part of this research, simulations were conducted to determine how to balance several competing criteria: the need to control costs while maintaining or increasing representativeness; the need to maintain representativeness without reducing effort repeatedly in the same geographies; and the need to reduce effort without decreasing sample size or inflating CVs of geographic domains to unacceptable levels.
This presentation discusses each of these criteria, how we incorporated them into an optimization strategy and objective function, and results of simulations that demonstrate the operational impact on cases identified for intervention, as well as the impact on cost and representativeness, of ignoring or incorporating the competing criteria.
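An objective function balancing criteria of this kind might, purely illustratively, take the form of a weighted combination of cost savings and penalties. The weights, names, and functional form below are assumptions for exposition, not the actual ACS optimization strategy.

```python
# Purely illustrative objective combining the competing criteria named
# above; weights and functional form are assumptions, not the ACS
# production objective.
def stopping_objective(cost_saved, r_indicator_change, cv_inflation,
                       repeat_reduction_penalty,
                       weights=(1.0, 5.0, 3.0, 2.0)):
    """Higher is better: reward cost savings; penalize any drop in a
    representativeness indicator, CV inflation in geographic domains,
    and repeatedly cutting effort in the same geographies."""
    w_cost, w_rep, w_cv, w_repeat = weights
    return (w_cost * cost_saved
            - w_rep * max(0.0, -r_indicator_change)
            - w_cv * max(0.0, cv_inflation)
            - w_repeat * repeat_reduction_penalty)
```

A function of this shape makes the tension explicit: stopping a case always scores positively on cost, and a simulation can then test whether the penalty terms are strong enough to keep the optimizer from repeatedly harvesting the same geographies.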