ESRA 2021 Program at a glance



Response and data quality in times of COVID-19

Session Organiser: Dr Jonathan Burton (ISER, University of Essex)
Time: Friday 16 July, 15:00 - 16:30

In early 2020 the WHO declared COVID-19 a pandemic. Face-to-face interviewing was suspended, and governments across the world enacted policies to reduce the spread of the virus. The virus and the associated government measures affected the health, employment, income, social life, schooling, and nearly all other aspects of everyday life. During this time, the appetite for data increased as researchers and policy-makers needed information to measure the effects of the pandemic on society.

This session looks at the effect of the COVID-19 pandemic on data collection: the response to surveys and the quality of the data collected. The presentations cover multiple countries and different modes, examining response rates and nonresponse bias as well as the impact on measurement and data quality.

Keywords: pandemic, COVID-19, survey response, data quality

Lockdown measures and time availability: Covid-19 Impact on survey response rates and nonresponse bias

Dr Caroline Roberts (University of Lausanne) - Presenting Author
Dr Michèle Ernst Stähli (FORS)

Anecdotal evidence and research conducted during the Covid-19 pandemic attest to numerous challenges around survey data collection, and to the potential advantages and disadvantages of different modes of data collection. While web-based data collection with self-administered questionnaires was considered the most viable data collection mode in the Spring 2020 lockdown period, it carries the disadvantage of low response rates and hence a potentially greater risk of non-response bias. Survey non-response is often associated with time availability, and being ‘too busy’ is a frequently cited reason for refusal. On the one hand, lockdown measures increase time availability for participation in surveys, especially among those in paid work who are unable to continue their usual activities from home, potentially leading to increased motivation to respond (among certain subgroups) and hence an increase in response rates. On the other hand, for those working from home, especially those with childcare responsibilities, increased demands on time (and cognitive load) may reduce availability and motivation to participate in surveys. To the extent that survey response propensity correlates with key measures of interest in the survey, there is potential for nonresponse bias – even if response rates increase overall. We investigate this in the present study using data from a probability-based web panel survey designed to investigate the impact of the Coronavirus pandemic on social attitudes and behaviours in the Swiss general population. Capitalising on the longitudinal design of the study, as well as the availability of linked administrative data from population registers, we investigate patterns of unit non-response and attrition across panel waves, and assess their likely impact on the accuracy of target survey variables plausibly correlated with time availability, including, for example, self-reported changes in the amount of time devoted to childcare and housework.
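The key point in this abstract is that nonresponse bias in a respondent mean depends on how response propensity covaries with the target variable, not on the response rate alone. A minimal simulation of that logic, with an entirely invented propensity model (nothing below comes from the study itself):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical target variable: weekly hours of childcare and housework.
    hours = rng.gamma(shape=2.0, scale=8.0, size=n)

    # Invented propensity model: heavier care loads lower the chance of
    # responding, even when the baseline response rate rises under lockdown.
    def propensity(h, base):
        return np.clip(base - 0.004 * h, 0.05, 0.95)

    for base, label in [(0.45, "pre-lockdown"), (0.60, "lockdown")]:
        p = propensity(hours, base)
        responded = rng.random(n) < p
        bias = hours[responded].mean() - hours.mean()
        print(f"{label}: response rate {responded.mean():.1%}, "
              f"bias in mean hours {bias:+.2f}")

In this toy setup the response rate rises under lockdown, yet the respondent mean still understates care hours, because the busiest people remain the least likely to respond.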


The External Validity of Self-reported Mobility: Comparing Survey and Mobile Phone Data during the COVID-19 Pandemic

Mr Fabian Kalleitner (University of Vienna) - Presenting Author
Mr David W. Schiestl (University of Vienna)
Mr Julian Aichholzer (University of Vienna)
Mr Georg Heiler (TU Wien)
Mr Tobias Reisch (Medical University of Vienna)

Measures to reduce individual mobility are prime governmental non-pharmaceutical interventions (NPIs) to curb infection rates during a pandemic. To evaluate the effectiveness of these efforts and increase their efficiency, it is crucial to understand which factors affect individual mobility. Most studies of individual mobility behaviour rely on self-reported measures from surveys, yet little is known about the external validity of these measures, especially during a pandemic. This study utilizes geodata from a large number of mobile devices to estimate mobility patterns in Austria and compares them to self-reported mobility from the Austrian Corona Panel Project (ACPP), an ongoing panel study on the socioeconomic and psychological impact of the COVID-19 pandemic. Spanning the period from March 2020 to January 2021, we test whether subgroup-specific mobility from self-reports relates to changes in mobility observed in mobile phone data. Specifically, our analyses look at differences between age groups, genders, and regions, and how these changed over the course of the COVID-19 pandemic. Finally, we investigate whether behavioural changes in the survey self-reports, as well as in the mobile phone data, on the one hand reflect government measures to curb the COVID-19 pandemic and, on the other hand, are able to predict subsequent changes in COVID-19 incidence rates.
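The external-validity check described here amounts to asking whether subgroup-level self-reports and device-based mobility co-move over time. A minimal sketch with fabricated numbers (the column names and the radius-of-gyration measure are illustrative assumptions, not the ACPP's or the mobile operator's actual schema):

    import pandas as pd

    # Fabricated monthly mobility indices for two age groups from two sources.
    survey = pd.DataFrame({
        "month": ["2020-03", "2020-04", "2020-05"] * 2,
        "group": ["age<30"] * 3 + ["age60+"] * 3,
        "self_report": [0.55, 0.40, 0.62, 0.35, 0.25, 0.45],  # share reporting trips
    })
    phone = pd.DataFrame({
        "month": ["2020-03", "2020-04", "2020-05"] * 2,
        "group": ["age<30"] * 3 + ["age60+"] * 3,
        "radius_km": [3.1, 2.0, 3.4, 2.2, 1.5, 2.6],  # mean radius of gyration
    })

    merged = survey.merge(phone, on=["month", "group"])

    # Within each subgroup, do the two series rise and fall together?
    for group, d in merged.groupby("group"):
        print(f"{group}: r = {d['self_report'].corr(d['radius_km']):.2f}")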


The Effect of “Pulsing” on Data Quality: Temporal Evidence on Accuracy, Coverage and Efficiency for European CATI Surveys from 2019 to 2021

Dr Alexandra Castillo (Pew Research Center) - Presenting Author

‘Pulsing,’ or identifying and removing nonworking numbers from phone samples before dialing, is one method in researchers’ toolkit for combating rising costs and shortening data collection timelines. However, little attention has been paid to the accuracy of pulsing over time within and across countries, or to the implications of using ‘pulsed’ samples for data quality. Though efficiency may improve, the process of differentiating working from nonworking numbers may introduce unintended error. The goal of this presentation is to disentangle this tension between survey error and operational outcomes – and provide insights to methodologists and practitioners alike – through an analysis of actual nonworking numbers identified as working (which may decrease dialing efficiency) and actual working numbers identified as nonworking (which may decrease population coverage and increase the risk of noncoverage bias).
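A minimal sketch of the implied confusion matrix, with invented counts, to make the two error rates concrete (the true status is observable only because, as described below, every number is dialed regardless of its flag):

    # Invented pulsing outcomes: flag (working/nonworking) vs. true line status.
    working_flag_working_true       = 6_000  # correct: kept and reachable
    working_flag_nonworking_true    = 1_500  # inefficiency: dead numbers still dialed
    nonworking_flag_working_true    =   900  # coverage risk: reachable numbers dropped
    nonworking_flag_nonworking_true = 5_600  # correct: dropped and dead

    total = (working_flag_working_true + working_flag_nonworking_true
             + nonworking_flag_working_true + nonworking_flag_nonworking_true)

    accuracy = (working_flag_working_true + nonworking_flag_nonworking_true) / total
    share_working_dropped = nonworking_flag_working_true / (
        nonworking_flag_working_true + working_flag_working_true)

    print(f"pulsing accuracy: {accuracy:.1%}")  # overall flag correctness
    print(f"working numbers flagged nonworking: {share_working_dropped:.1%}")  # noncoverage risk

If pulsed numbers were actually excluded before dialing, the second rate would bound the noncoverage introduced by the flag, while the first largely determines dialing efficiency.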

For the past three waves of Pew Research Center’s Global Attitudes Project (2019, 2020 and 2021), landline samples from seven European countries have been pulsed. Numbers have been flagged as either “working” or “not working”, but all numbers are dialed using the full call design regardless of the pulsing flag outcome.

Preliminary analyses of numbers dialed in the 2019 and 2020 surveys found that the average accuracy rate of pulsing in these seven countries was 80% in spring 2019 but fell to 74% in summer 2020, with notable variation within and between countries. Moreover, 21% of landline interviews overall were achieved on numbers flagged as nonworking by pulsing in 2019, and 10% were achieved on such numbers in summer 2020. In general, inaccuracies across these two years contributed more to inefficiency than to noncoverage.

This presentation will incorporate data from the latest round of the Global Attitudes Project scheduled to be collected during the spring of 2021. A fourth data point from a fall 2020 Center survey will be added for three European countries – France, Germany and the UK – facilitating further analyses of country-specific pulsing trends before and during the COVID-19 pandemic, which will be a key feature of our presentation.