
ESRA 2019 full program




Fieldwork Monitoring Tools for Large-Scale Surveys: Lessons from the Field 1

Session Organisers Dr Michael Bergmann (Munich Center for the Economics of Aging (MEA))
Dr Sarah Butt (City, University of London)
Dr Salima Douhou (City, University of London)
Mr Brad Edwards (Westat)
Mr Patrick Schmich (Robert Koch Institut (RKI))
Dr Henning Silber (GESIS)
Time: Tuesday 16th July, 16:00 - 17:00
Room D17

A key challenge in survey research is how to manage fieldwork to maximise sample sizes and balance response rates whilst ensuring that fieldwork is completed on time, within budget and to the highest possible standards. This is particularly demanding when monitoring fieldwork across multiple surveys simultaneously, across time, or across different countries. Effective monitoring requires access to real-time information which can be collected by and shared with multiple stakeholders in a standardised way and responded to in a timely manner. This process often involves researchers who may be several steps removed from the fieldwork, located in a different organisation and even a different country.

Increasingly, fieldwork agencies and survey infrastructures have access to detailed progress indicators collected electronically in the field. Making use of this information requires streamlining the information received by means of fieldwork monitoring systems or dashboards. Developing an effective fieldwork monitoring system is not as straightforward as it may at first appear and raises both methodological and operational questions. Methodological questions include which indicators to use for monitoring, how to combine paradata from multiple sources to generate indicators, and how to use the indicators during fieldwork. Operational questions include how to most effectively present the agreed indicators, which stakeholders the monitoring system should cater for, how often updates should be received, how to implement any interventions, and how the feedback loop between the monitoring team and interviewers in the field should be managed. Thinking about these questions is crucial in ensuring that increased access to real-time data leads to improvements in monitoring and responding to issues in the field. The data should facilitate informed decision-making about how best to realise the often competing goals of fieldwork delivery, quality and costs.
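
To make the indicator-generation step described above concrete, the sketch below shows one minimal way case-level paradata could be combined into a handful of progress indicators. It is purely illustrative: the record layout, outcome codes and choice of indicators are assumptions, not a reference to any particular system discussed in this session.

```python
from collections import Counter

# Hypothetical case-level fieldwork records, e.g. as exported daily from a
# case management system. Field names and outcome codes are illustrative only.
cases = [
    {"case_id": "0001", "interviewer": "A12", "outcome": "interview",  "attempts": 2},
    {"case_id": "0002", "interviewer": "A12", "outcome": "refusal",    "attempts": 3},
    {"case_id": "0003", "interviewer": "B07", "outcome": "noncontact", "attempts": 5},
    {"case_id": "0004", "interviewer": "B07", "outcome": "interview",  "attempts": 1},
]

def progress_indicators(cases):
    """Aggregate case-level records into simple fieldwork progress indicators."""
    outcomes = Counter(c["outcome"] for c in cases)
    issued = len(cases)
    interviews = outcomes["interview"]
    return {
        "issued": issued,
        "interviews": interviews,
        "response_rate": interviews / issued,
        "refusal_rate": outcomes["refusal"] / issued,
        "mean_attempts": sum(c["attempts"] for c in cases) / issued,
    }

print(progress_indicators(cases))
```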

This session aims to bring together those working with fieldwork monitoring dashboards or other tools to share their experiences and learning. We welcome presentations from different stakeholders (fieldwork agencies, research infrastructures, academic institutes, etc.) and survey types (national and cross-national, cross-sectional and longitudinal, conducted in any mode). Presentations could outline how the types of methodological and practical issues raised above have been addressed and demonstrate how the choices made have had an impact, good or bad, on capacity to monitor fieldwork. Presentations may also provide systematic reviews of one or more performance indicators, results of implementing adaptive and responsive designs informed by monitoring dashboards, and experimental and simulation studies on fieldwork monitoring.

Keywords: fieldwork monitoring; dashboards; progress indicators; adaptive design

Improving Central Monitoring of Decentralised Fieldwork Activities for Cross-National Surveys: The Case of the Fieldwork Management System in the European Social Survey

Mr Roberto Briceno-Rosas (GESIS - Leibniz Institute for the Social Sciences)
Dr Sarah Butt (City, University of London) - Presenting Author
Dr Joost Kappelhof (SCP – The Netherlands Institute for Social Research)
Mr Niccolò Ghirelli (City, University of London)

A key strength of the European Social Survey (ESS) is its emphasis on functionally equivalent survey designs and shared data collection and data processing protocols. This input-harmonised approach enables robust comparisons across European countries to be drawn. However, data collection activities are conducted by decentralised national teams and survey agencies. This makes it challenging to centrally monitor adherence to standards, and the resulting data, during fieldwork in a timely, consistent, and comprehensive way. To overcome this issue, the ESS has introduced a new electronic Fieldwork Management System (FMS).

In Round 9, ESS countries are required to upload weekly monitoring information to the FMS online portal using a pre-defined template. It provides all ESS stakeholders (survey agencies, national teams, and the central team) with access to timely, shared data on fieldwork progress at the case-level in a standardised format, and generates pre-defined summary indicators across countries. Access to such data has the potential to make monitoring more efficient and effective, and offers new opportunities to study the effectiveness of fieldwork strategies across countries in a comparable way. Additionally, it highlights the challenges inherent in translating this information into better and more responsive fieldwork monitoring across multiple countries.
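
As a purely illustrative sketch of the kind of cross-country summaries described above, the snippet below aggregates hypothetical weekly case-level records into per-country outcome shares. The column names and outcome codes are assumptions; the actual FMS template and its pre-defined indicators are not reproduced here.

```python
import pandas as pd

# Illustrative weekly case-level upload; the real FMS template is not shown in
# this abstract, so column names and codes here are assumptions.
fms = pd.DataFrame({
    "country": ["AT", "AT", "BE", "BE", "BE"],
    "week":    [4, 4, 4, 4, 4],
    "outcome": ["interview", "refusal", "interview", "interview", "noncontact"],
})

# Share of each fieldwork outcome per country and week, one column per outcome.
summary = (
    fms.groupby(["country", "week"])["outcome"]
       .value_counts(normalize=True)
       .unstack(fill_value=0.0)
)
print(summary)
```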

In this paper, we provide an overview of the monitoring data in the FMS and the analyses it can facilitate. We present real-life examples of how the central team has used the FMS to inform fieldwork monitoring. Furthermore, we reflect on the challenges and limitations of the current tool and the available indicators that have been exposed in the process of monitoring fieldwork in more than 25 countries simultaneously. We discuss the implications of these findings for fieldwork monitoring in future rounds of the ESS and attempt to generalise lessons for the development of central monitoring in other cross-national surveys with decentralised data collection.


Building valkyRie: An Automated Survey Quality Control Analysis Tool That Generates a Running Database of Results to Compare International Multi-Country and Multi-Mode Test Results Across Field Partners

Mr David Peng (D3 - Designs, Data, Decisions) - Presenting Author
Mr David Rae (D3 - Designs, Data, Decisions)

Ongoing or post-field quality control via statistical analysis of survey data and paradata has long been conducted at D3 for individual survey projects. In 2017, we started an internal initiative to create an automated statistical analysis tool that allows our staff to run these tests more efficiently. In addition to streamlining the process, our goal was to create a tool that would allow us to keep a running database of the results. This data, in turn, can be used to compare quality-control test outcomes across countries, modes, and field partners over time. The authors will present some of their experiences building and hosting the tool, as well as a preliminary exploratory analysis of the growing database of international multi-country and multi-mode quality control results.
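
The snippet below is a hypothetical sketch of the general pattern described in this abstract: an automated check run on survey data, with the outcome appended to a running results database so that it can later be compared across countries, modes and field partners. It is not based on valkyRie itself; the check, table schema and all names are invented for illustration.

```python
import sqlite3
from statistics import median

# Hypothetical interview durations in minutes from one fieldwork batch.
durations = {"r001": 34.5, "r002": 31.0, "r003": 6.2, "r004": 40.1}

def short_interview_check(durations, threshold=0.5):
    """Flag interviews shorter than `threshold` times the median duration."""
    cutoff = threshold * median(durations.values())
    return [rid for rid, mins in durations.items() if mins < cutoff]

# Append the test outcome to a running results database so that later analyses
# can compare quality-control outcomes across countries, modes and partners.
con = sqlite3.connect("qc_results.db")
con.execute("""CREATE TABLE IF NOT EXISTS qc_results (
    project TEXT, country TEXT, mode TEXT, partner TEXT,
    test TEXT, n_flagged INTEGER)""")
flagged = short_interview_check(durations)
con.execute("INSERT INTO qc_results VALUES (?, ?, ?, ?, ?, ?)",
            ("proj_x", "KE", "CAPI", "partner_a", "short_interview", len(flagged)))
con.commit()
con.close()
```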


Development of a Multi-Country Face-to-Face Progress, Performance and Quality Monitoring Tool - Learnings from a Multi-Country Study

Mr Jamie Burnett (Kantar Public) - Presenting Author


The goal of this tool is to improve compliance with the agreed contact strategy and to ensure quality concerns can be addressed in real time. The tool provides country field teams with real-time fieldwork progress, performance and quality metrics based on a combination of sample data, respondent data and paradata from electronic contact data (e.g. interim outcome codes, time and date stamps, and GPS), all collected at each contact attempt. The tool was developed centrally to provide relevant monitoring and quality control metrics at the country, sample point, interviewer and individual address level. We compare various fieldwork compliance measures across two years in four European countries to see whether there has been an improvement in the level of compliance. The surveys used for comparison were conducted in the same countries over two consecutive years, are of similar length and have the same sample size requirements. Both surveys were briefed in exactly the same way and were subject to the same fieldwork requirements. The results of this comparison show a marked improvement across all four countries in most, if not all, compliance metrics.
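
As an illustration of the kind of contact-strategy compliance check such a tool might automate, the sketch below evaluates one address's contact-attempt paradata against a simple rule. The rule (at least four attempts including one evening attempt before a final noncontact) and the field names are assumptions, not the actual strategy or metrics used in the study.

```python
from datetime import datetime

# Hypothetical contact-attempt paradata for one sampled address; outcome codes,
# timestamps and the compliance rule below are illustrative only.
attempts = [
    {"when": datetime(2018, 5, 2, 11, 15), "outcome": "noncontact"},
    {"when": datetime(2018, 5, 4, 19, 40), "outcome": "noncontact"},
    {"when": datetime(2018, 5, 7, 14, 5),  "outcome": "noncontact"},
    {"when": datetime(2018, 5, 9, 18, 30), "outcome": "noncontact"},
]

def compliant_noncontact(attempts, min_attempts=4, evening_hour=18):
    """Check whether a finalised noncontact met the agreed contact strategy."""
    enough_attempts = len(attempts) >= min_attempts
    evening_attempt = any(a["when"].hour >= evening_hour for a in attempts)
    return enough_attempts and evening_attempt

print(compliant_noncontact(attempts))  # True for the example above
```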