Fieldwork Monitoring Tools for Large-Scale Surveys: Lessons from the Field 2

Session Organisers: Dr Michael Bergmann (Munich Center for the Economics of Aging (MEA))
Dr Sarah Butt (City, University of London)
Dr Salima Douhou (City, University of London)
Mr Brad Edwards (Westat)
Mr Patrick Schmich (Robert Koch Institut (RKI))
Dr Henning Silber (GESIS)
Time: Wednesday 17th July, 09:00 - 10:30
Room: D17

A key challenge in survey research is how to manage fieldwork to maximise sample sizes and balance response rates whilst ensuring that fieldwork is completed on time, within budget and to the highest possible standards. This is particularly demanding when monitoring fieldwork across multiple surveys simultaneously, across time, or across different countries. Effective monitoring requires access to real-time information which can be collected by and shared with multiple stakeholders in a standardised way and responded to in a timely manner. This process often involves researchers who may be several steps removed from the fieldwork, located in a different organisation and even a different country.

Increasingly, fieldwork agencies and survey infrastructures have access to detailed progress indicators collected electronically in the field. Making use of this information requires streamlining the information received by means of fieldwork monitoring systems or dashboards. Developing an effective fieldwork monitoring system is not as straightforward as it may at first appear and raises both methodological and operational questions. Methodological questions include which indicators to use for monitoring, how to combine paradata from multiple sources to generate indicators, and how to use the indicators during fieldwork. Operational questions include how to most effectively present the agreed indicators, which stakeholders the monitoring system should cater for, how often updates should be received, how to implement interventions, and how the feedback loop between the monitoring team and interviewers in the field should be managed. Thinking about these questions is crucial in ensuring that increased access to real-time data leads to improvements in monitoring and responding to issues in the field. The data should facilitate informed decision-making about how best to realise the often competing goals of fieldwork delivery, quality and costs.
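
To make the indicator question concrete, the sketch below shows one minimal way of turning call-record paradata into headline progress indicators for a dashboard. It is purely illustrative: the field names, disposition codes and rate definitions are assumptions, not anything prescribed by this session.

from collections import Counter

call_records = [
    {"case_id": 1, "disposition": "noncontact"},
    {"case_id": 1, "disposition": "complete"},
    {"case_id": 2, "disposition": "refusal"},
    {"case_id": 3, "disposition": "noncontact"},
    {"case_id": 4, "disposition": "appointment"},
]

def progress_indicators(records):
    # Keep the most recent disposition per case (records assumed in call order).
    finals = {}
    for rec in records:
        finals[rec["case_id"]] = rec["disposition"]
    counts = Counter(finals.values())
    issued = len(finals)
    return {
        "response_rate": counts["complete"] / issued,
        "refusal_rate": counts["refusal"] / issued,
        "contact_rate": (issued - counts["noncontact"]) / issued,
    }

print(progress_indicators(call_records))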

This session aims to bring together those working with fieldwork monitoring dashboards or other tools to share their experiences and learning. We welcome presentations from different stakeholders (fieldwork agencies, research infrastructures, academic institutes, etc.) and survey types (national and cross-national, cross-sectional and longitudinal, conducted in any mode). Presentations could outline how the types of methodological and practical issues raised above have been addressed and demonstrate how the choices made have had an impact, good or bad, on the capacity to monitor fieldwork. Presentations may also provide systematic reviews of one or more performance indicators, results of implementing adaptive and responsive designs informed by monitoring dashboards, and experimental and simulation studies on fieldwork monitoring.

Keywords: fieldwork monitoring; dashboards; progress indicators; adaptive design

Targeted Follow-Up Letters in PIAAC Germany: An Effort to Convert Nonrespondents

Mrs Silke Martin (GESIS Leibniz Institute for the Social Sciences) - Presenting Author
Mrs Anouk Zabal (GESIS Leibniz Institute for the Social Sciences)

Exploring reasons for survey (non-)participation is a central topic in methodological survey research. Numerous fieldwork measures are available in the survey researcher’s toolbox to address sample members and account for their different characteristics and motivations. The challenges of producing high-quality data while minimizing nonresponse and costs have led to innovative approaches to addressing sample members. Responsive/adaptive designs aim to address sample members with different treatments (e.g., variation of incentives, interview length, or administration mode) and consciously leave the one-size-fits-all protocols behind (Groves & Heeringa, 2006; Tourangeau, Brick, Lohr, & Li, 2017; Wagner, 2008).

We will present an adaptive design feature for the conversion of nonrespondents that was implemented in PIAAC Germany (Programme for the International Assessment of Adult Competencies, Cycle 1). Survey implementation was largely determined by strict design specifications (e.g., fixed interview mode and length) and exacting international standards that did not leave much room for adaptive measures. Furthermore, the basic conditional incentive used in PIAAC Germany was quite high (50 euros), and a flexible increase of the incentive amount was not an option. However, an innovative adaptive design feature was attempted in the re-issue phases. Nonrespondents were allocated to one of five subgroups: sample person moved, non-contacts, migrants, low-educated persons (identified via a classification tree analysis), and others. Nonrespondents subsequently received one of five different targeted follow-up letters. Each letter was crafted with the intention to appeal to the specific group of nonrespondents so as to improve their survey response, with a special view to migrants and persons with a low education level, as these groups tended to be underrepresented in the sample. The presentation will illustrate the adaptive procedure and summarize results of this approach.
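
The allocation step can be pictured as a simple rule cascade over nonrespondent characteristics. The sketch below is an illustrative reconstruction under assumed field names and letter labels, not the actual PIAAC implementation; the low-education flag stands in for the output of the classification tree analysis mentioned above.

def allocate_subgroup(case):
    # Rules are checked in a fixed (assumed) priority order.
    if case.get("moved"):
        return "moved"
    if case.get("never_contacted"):
        return "noncontact"
    if case.get("migration_background"):
        return "migrant"
    if case.get("low_education"):  # flag from a classification tree model
        return "low_educated"
    return "other"

LETTERS = {
    "moved": "letter_A", "noncontact": "letter_B", "migrant": "letter_C",
    "low_educated": "letter_D", "other": "letter_E",
}

nonrespondents = [
    {"id": 101, "moved": False, "never_contacted": True},
    {"id": 102, "migration_background": True},
]
for case in nonrespondents:
    print(case["id"], LETTERS[allocate_subgroup(case)])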


Using Field Monitoring Strategies to Reduce Attrition Bias in a Panel Study: Application During Data Collection in the Survey of Health, Ageing and Retirement in Europe (SHARE)

Dr Michael Bergmann (Technical University of Munich (Chair for the Economics of Aging), Munich Center for the Economics of Aging (MEA)) - Presenting Author
Dr Annette Scherpenzeel (Technical University of Munich (Chair for the Economics of Aging), Munich Center for the Economics of Aging (MEA))

The Survey of Health, Ageing and Retirement in Europe (SHARE) is a multidisciplinary and cross-national face-to-face panel study exploring the process of population ageing. For the sixth wave of data collection, we applied an adaptive/responsive fieldwork design in the German sub-study of SHARE to test actual possibilities and effects of implementing targeted monitoring strategies during fieldwork. The central aim of this design was to reduce attrition bias by attempting to achieve more equal response probabilities across subgroups. In particular, we test whether the following interventions have an effect and can help to reduce attrition bias:
1) Interviewer bonus incentives for interviews with respondents aged 80 years and older
2) Interviewer feedback on response rates including requests to retrain refusal coding
3) Interviewer support to better reach nursing home respondents
4) Interviewer instructions to optimize the call/contact schedule

We do this by analyzing deviations from average response probabilities for specific subgroups, using respondents’ characteristics/answers from SHARE Wave 5 as a benchmark. This “dashboard” of response probabilities allowed near-immediate feedback to the survey agency and focused actions during fieldwork with regard to specific groups of respondents.
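
A minimal sketch of that dashboard logic, with invented subgroup names, rates and threshold, compares each subgroup’s current response rate against its Wave 5 benchmark and flags the largest shortfalls for targeted action:

benchmark_w5 = {"age_80_plus": 0.72, "nursing_home": 0.55, "age_50_79": 0.85}
current_w6 = {"age_80_plus": 0.61, "nursing_home": 0.40, "age_50_79": 0.83}

deviations = {g: current_w6[g] - benchmark_w5[g] for g in benchmark_w5}
for group, dev in sorted(deviations.items(), key=lambda kv: kv[1]):
    # Flag subgroups falling more than 5 points behind their benchmark.
    flag = "INTERVENE" if dev < -0.05 else "ok"
    print(f"{group:15s} {dev:+.2f}  {flag}")
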
Our analyses show that the applied adaptive/responsive measures had mixed effects: While bonus incentives for 80+ respondents increased the number of interviews for this group compared to Wave 5, especially at the beginning of fieldwork, individual interviewer feedback and support had a relatively small impact. Similarly, our requests to shift contact attempts to the evenings showed only a very short-term effect. Based on these results, we evaluate the effectiveness of such designs for different groups of panel members and give practical advice to researchers on directing their efforts using this kind of additional information.


Evaluating the Use of Data Analytics for Field Decisions in the US General Social Survey [GSS]

Professor Colm O'Muircheartaigh (University of Chicago and NORC) - Presenting Author
Ms Holly Hagerty (NORC at the University of Chicago)
Ms Katie Archambeau (NORC at the University of Chicago)
Dr Chang Zhao (NORC at the University of Chicago)
Mr Ned English (NORC at the University of Chicago)

For the past 15 years, researchers at NORC have been predicting final response rates for face-to-face studies by building a model based on detailed field disposition histories from previous projects. In addition to providing the overall prediction of response rate, the model permits more informed case releases, early warning of potential production shortfalls, and the potential to test remedies in real time. We show the strengths and weaknesses of the approach during the fieldwork period, and assess the value of this approach subsequent to the survey by reviewing the results of changes in field strategy during the survey period. Projects have included Making Connections [Annie E Casey Foundation]; the National Social Life, Health, and Aging Project [NSHAP; National Institute on Aging]; the Survey of Consumer Finances [Federal Reserve]; the National Longitudinal Survey of Youth [NLSY; Bureau of Labor Statistics]; and the General Social Survey [GSS; National Science Foundation].
The GSS has experienced a significant decline in response rates since the 2014 round, paralleling the general sectoral decline in response rates. This has led to a shortfall in the number of completed cases in the survey. Early indications from the field suggested a further deterioration in 2018. We demonstrate the potential of the model in making decisions about how to balance response rate against number of completed cases, and evaluate the decision using the field outcomes for a range of categories of cases in the 2018 GSS.
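
For a sense of the general approach (this is an invented toy, not NORC’s production model; features and data are assumptions), one can fit a model on case-level disposition histories from earlier rounds and score active cases to project the final response rate:

import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per case from past projects: [n_attempts, ever_contacted, soft_refusal]
X_hist = np.array([[2, 1, 0], [6, 1, 1], [1, 0, 0], [4, 1, 0], [8, 0, 0]])
y_hist = np.array([1, 0, 0, 1, 0])  # 1 = case ended as a completed interview

model = LogisticRegression().fit(X_hist, y_hist)

# Active cases in the current round: mean predicted completion probability
# gives a projected response rate, available well before fieldwork closes.
X_active = np.array([[3, 1, 0], [5, 1, 1], [2, 0, 0]])
p_complete = model.predict_proba(X_active)[:, 1]
print("projected response rate:", p_complete.mean().round(3))
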


CAPI Back-Checking in Research Environments with Poor Internet Connectivity

Mrs Stacey Frank (D3: Designs, Data, Decisions)
Mr Connor Brassil (D3: Designs, Data, Decisions) - Presenting Author

Recently, CAPI data collection has become more common in international survey research. While this has increased transparency and the speed of data delivery via regular data uploads and dashboard access, the PAPI to CAPI transition can paradoxically mean that far less information is available to the field supervision staff who ensure the quality of the work.

Unlike a PAPI interview, where an interviewer can physically hand a completed questionnaire to a supervisor for verification, a CAPI interview is often locked on the device after completion. In locations without reliable internet access, this can mean the data is accessible to nobody until that device comes into Wi-Fi range and the data is transferred off the device.

D3: Designs, Data, Decisions (D3) has worked with field implementers in locations like Afghanistan, India and Mauritania to make the PAPI to CAPI transition. We’ve discovered this change usually involves a complete rethinking of the team’s field management structure. To complete an in-field back-check, teams must reconsider basic questions: Who checks the questionnaire after it is completed? When is it checked, and which items are verified? Without the paper trail provided by a PAPI questionnaire, all of these issues need to be rethought.

If field implementers transition to CAPI without considering these issues, they run the risk of efficiently completing fieldwork without being able to verify that the data was collected correctly. D3 uses its own proprietary CAPI software, Research Control Solutions (RCS). For RCS, we’ve developed an integrated back-checking feature that offers flexible options for getting questionnaire data into the hands of supervisors even when working with limited internet access. This presentation will discuss the functionality of RCS’s back-checking feature, along with other strategies we’ve used to overcome the CAPI information transfer challenge in the most remote and unconnected corners of the world.
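
As a generic illustration of the transfer problem (this is not the RCS API; all item names, identifiers and the sampling rate are invented), one common strategy is to keep completed interviews in an on-device queue and expose only a reduced back-check payload that a supervisor can pull over a local connection before anything reaches the central server:

import json
import random

random.seed(1)  # deterministic demo
BACKCHECK_ITEMS = ["q1_age", "q5_household_size", "q12_consent"]  # assumed items

completed_queue = [
    {"interview_id": "AF-0031", "q1_age": 34, "q5_household_size": 6,
     "q12_consent": "yes", "q20_income": 12000},
    {"interview_id": "AF-0032", "q1_age": 51, "q5_household_size": 4,
     "q12_consent": "yes", "q20_income": 8000},
]

def backcheck_payload(queue, sample_rate=0.5):
    # Draw a random subset of completed interviews and strip them down to
    # the verification items, so a supervisor device can review answers
    # without waiting for full data upload over an internet connection.
    sampled = [iv for iv in queue if random.random() < sample_rate]
    return json.dumps([
        {"interview_id": iv["interview_id"],
         **{item: iv[item] for item in BACKCHECK_ITEMS}}
        for iv in sampled
    ], indent=2)

print(backcheck_payload(completed_queue))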