Wednesday 19th July, 16:00 - 17:30 Room: N 101


Conducting high quality random probability surveys in Europe - problems, solutions and lessons learnt

Chair: Miss Sally Widdop (Ipsos MORI)
Coordinator 1: Mr Andrew Cleary (Ipsos MORI)
Coordinator 2: Dr Koen Beullens (KU Leuven)
Coordinator 3: Dr Ineke Stoop (The Netherlands Institute for Social Research - SCP)

Session Details

Data users and commissioners of research interested in making comparisons across countries require high quality survey data that provide accurate, reliable and valid results. Collecting such data is challenging in any survey, but even more so in a cross-national context where multiple languages, fieldwork practices and national survey climates all come into play.

In particular, varying levels of experience in carrying out random probability surveys, as well as practical or logistical factors such as fieldwork capacity, interviewer remuneration, availability of sampling frames, technical capacity for fieldwork monitoring and differing quality control measures, can all affect the success and cross-national comparability of the survey. Further to this, there is the challenge of narrowing the gap between rising expectations of quality and limited resources.

This session seeks to explore the challenges associated with conducting high-quality, face-to-face random probability surveys in Europe. We aim to bring together national and international fieldwork agencies to present, compare and discuss local initiatives and experiences.

We are particularly interested in exploring the following areas:
- interviewer training, remuneration and/or methods for controlling the quality of interviewer fieldwork;
- the lack of suitable sampling frames in some countries and/or techniques for ensuring high quality in random route sampling approaches;
- the use of technology for enhancing fieldwork monitoring and/or making interventions during fieldwork;
- the trade-offs practitioners make in response to increasing demands and the rising costs of data collection.

We welcome papers that offer country-specific case studies as well as cross-national examples that illustrate the solutions that have been applied to overcome one or more of these challenges.

Paper Details

1. Quality targets and quality control in the European Social Survey
Dr Ineke Stoop (The Netherlands Institute for Social Research/SCP)
Dr Joost Kappelhof (The Netherlands Institute for Social Research/SCP)
Mr Achim Koch (GESIS - Leibniz-Institut für Sozialwissenschaften)

Cross-national surveys such as the European Social Survey (ESS) aim for high quality and optimal comparability across countries and over time. With regard to data collection, the ESS pursues these aims by providing detailed fieldwork specifications to the countries and their national survey agencies. Random sampling is mandatory, and national sampling designs are signed off by experts. Interviewing is face-to-face, by experienced, well-briefed interviewers. During data collection, close monitoring of progress is required. After the data collection phase, quality is assessed in detail, using both paradata from the ESS main dataset and data from the detailed ESS contact forms, which comprise contact history information and neighbourhood observations.

Despite all these efforts, cross-country differences resulting from practical country-level constraints can threaten the ESS aims. For example, in some countries population registers can be used for sampling, while in others no sampling frames are available and random route procedures have to be developed. In addition, the funds available for fieldwork vary greatly, even allowing for diverging fieldwork costs. And finally, experience with random sampling, face-to-face interviewing and strict fieldwork monitoring may not be at the same level across countries and survey agencies.

These factors make it difficult to conduct fieldwork according to the same standards in every country and to realise comparable outcomes. Another problem is that, even given the ESS quality control process, it may be difficult to find out exactly, after fieldwork has ended, what happened in each country and at every call to sample persons. Even greater challenges need to be overcome when the aim is not only to monitor and document, but also to actively intervene in data collection operations during fieldwork.

The presentation gives an overview of the quality and comparability aims of the ESS and of its quality control and assessment tools. It will also show which information is hard to get, which aims are hard to reach, and which improvements are being considered.

Koch, Achim, Annelies G. Blom, Ineke Stoop and Joost Kappelhof (2009) Data Collection Quality Assurance in Cross-National Surveys: The Example of the ESS. Methoden Daten Analysen. Zeitschrift für Empirische Sozialforschung, Jahrgang 3, Heft 2, 219-247.


2. New elements introduced in the sampling strategy of the 4th European Quality of Life Survey (EQLS): challenges and lessons learnt
Ms Daphne Ahrendt (Eurofound)
Ms Eszter Sandor (Eurofound)
Dr Tadas Leoncikas (Eurofound)

The EQLS is a well-known face-to-face multi-stage random probability survey, carried out for the 4th time in 2016 in all 28 EU Member States and 5 Candidate Countries. The sampling strategy used in the 4th EQLS includes a new component: the controlled release of sampling batches.
To minimize the variability in the achieved number of interviews by PSU and to maximize response rates, interviewers were provided with only enough addresses/individuals in the original batch to hit the target number of interviews, based on an agreed ratio of addresses/individuals to achieved interviews.
The sample release process was the same for urban and rural PSUs. A first batch was issued at a 2:1 ratio of addresses/individuals to target interviews in all PSUs, except where the ratio provided in the sampling plan was lower, in which case the first batch was based on that ratio (e.g. in Macedonian rural PSUs, 16 addresses were issued for a target of 10 interviews, given an estimated RR of 70% and an ineligibility rate of 10%).
Three to five weeks into fieldwork (depending on sufficient progress), countries used their outcome and contact history data to provide a more reliable estimate of the likely RR, and thus of the total achieved number of interviews. This information was then used to estimate the number of additional addresses/individuals (if any) required to meet the achieved sample size requirements. All calculations, and the data used for them, were independently validated by the central coordination team and approved by Eurofound.
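To make the arithmetic behind this release strategy concrete, the sketch below computes a first-batch size and a mid-fieldwork top-up under the rules just described; the function names and parameters are ours, not part of the EQLS tooling.

    import math

    def first_batch(target, default_ratio=2.0, plan_rr=None, inelig=0.0):
        # Addresses/individuals to issue initially: the default 2:1 ratio,
        # unless the sampling plan's own ratio (implied by its expected
        # response rate and ineligibility rate) is lower.
        ratio = default_ratio
        if plan_rr is not None:
            ratio = min(ratio, 1.0 / (plan_rr * (1.0 - inelig)))
        return math.ceil(target * ratio)

    def top_up(target, achieved, outstanding, revised_rr, inelig=0.0):
        # Additional addresses/individuals needed once fieldwork data
        # yield a more reliable response-rate estimate.
        yield_rate = revised_rr * (1.0 - inelig)
        projected = achieved + outstanding * yield_rate
        return max(0, math.ceil((target - projected) / yield_rate))

    # Macedonian rural PSU example from the abstract:
    # target = 10 interviews, estimated RR = 70%, ineligibility = 10%
    print(first_batch(10, plan_rr=0.70, inelig=0.10))  # -> 16 addresses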
The presentation will focus on the actual implementation of this method: how was sample release managed and recorded, and how many batches were released, of what size and in which countries? A second question to be answered is what impact the method had on the response rate.
The success of the method was challenged by a number of factors, and the presentation will also cover some of the lessons learnt. The first challenge was a slow start or delay of fieldwork in several countries, which resulted in requests for more additional sample than envisaged. This raises questions about sample quality, for instance: what measures were in place to ensure that interviewers kept up the required effort to visit each address at least four times? Low response was another challenge, where again a far larger number of addresses was needed to complete the required sample size.


3. Developing target-specific interviewer training for studies placing high demands on interviewers
Miss Anne Kersting (infas Institut für angewandte Sozialwissenschaft GmbH (Institute for Applied Social Sciences), Bonn, Germany)
Miss Jennifer Weitz (University of Siegen, Germany)

That interviewer training is an important element of the survey process is undisputed. However, the form of training (in-person interviewer training, online training or written training materials), the structuring of content and the forms of exercise best suited to ensure training success are still debated. What makes a good training? How should the different training elements, especially theoretical and practical ones, be weighted? Which measures are suitable for preparing and motivating interviewers for fieldwork?
Our paper presents and evaluates the training concept for the school-leaver sample of the German National Educational Panel Study (NEPS), a panel study based on six age-specific samples (cohorts). The school-leaver sample is a representative sample of students recruited in grade 9 (aged 15-17) in different types of schools in 2010. Students start with paper-based surveys in the classroom. After they leave school, respondents are interviewed individually, either at home or via telephone. Each year, they are required to report (or update) their biography. In 2013 and 2016, they were additionally asked to participate in competence tests as part of the interview.
Demands placed on the interviewers are high. They must uphold the respondents' motivation, conduct biographical interviews while manoeuvring through complex questionnaires, perform computer-based competence tests, ask standardized questions and document particularities. Each part of the 90-minute interview requires certain interviewer behaviour and a different set of skills and knowledge. Hence, we developed in-person interviewer training concepts consisting of several parts. These parts differ in length and detail between different groups of interviewers, to which interviewers are assigned according to their previous experience with the school-leaver sample of the NEPS study. In the one- to three-day training, multiple training forms are used to guarantee knowledge transfer. Each interviewer receives an accompanying handbook, as well as the presentation and practice materials.
We evaluate the training concept using evaluation sheets that the interviewers complete anonymously after the training. Among other things, the interviewers told us which parts should be treated in more or less detail and how well prepared they felt. Furthermore, they shared their expectations about the main functions of the interviewer training. We cluster the interviewers based on those evaluation sheets and analyse how certain interviewer groups (e.g. more or less experienced interviewers, or interviewers with certain characteristics) differ in their expectations and rating of the training, so that we are able to optimize the training concept in a target-specific fashion.


4. Case-Studies on Data-Driven Interviewer Monitoring
Dr Zeina Mneimneh (University of Michigan)
Dr Lars Lyberg (Inizio)
Mr Sharan Sharma (TAM India and University of Michigan)
Mr Mahesh Vyas (Centre for Monitoring Indian Economy)
Mr Frederic Malter (Max-Planck- Institute for Social Law and Social Policy)
Mr Yuchieh Lin (University of Michigan)
Dr Yasmin Altwaijri (King Faisal Specialized Hospital and Research Center)

Interviewers can be a major source of survey error, contributing to both bias and variance error components (Blom and Korbmacher, 2013; Davis, Couper, Janz, Caldwell, and Resnicow, 2010; Groves, 1989; Groves et al., 2009), including nonresponse and measurement error (West and Olson, 2010). Such error can be caused by the interviewer's unintentional or intentional deviation from the study protocol. Prevention of such deviation is mainly attempted through careful interviewer training, appropriate remuneration, and supervision methods such as recording and evaluating interviews, re-contacting a subset of respondents to verify the information recorded by the interviewer, or observing interviewer behavior in the field. The American Association for Public Opinion Research (AAPOR) recommends that a random 5-15 percent of each interviewer's work be verified or observed (AAPOR, 2003). Though increasing this percentage allows for better coverage, it is costly and demands a large team of verifiers or evaluators, especially if such quality control measures are to be completed shortly after the interview.
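As a simple illustration of that recommendation, a verification subsample could be drawn per interviewer as below; the file name, column names and the 10 percent fraction are our assumptions, not part of the AAPOR guidance or of the case studies.

    import pandas as pd

    # One row per completed interview; columns are illustrative.
    interviews = pd.read_csv("interviews.csv")  # case_id, interviewer_id, ...

    # Randomly select 10% of each interviewer's work for verification,
    # within the 5-15 percent range recommended by AAPOR (2003).
    to_verify = (interviews
                 .groupby("interviewer_id", group_keys=False)
                 .sample(frac=0.10, random_state=42))
    print(to_verify[["interviewer_id", "case_id"]])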

To augment such interviewer monitoring procedures (which are typically conducted on a random sample) and to target a larger sample of cases worked by specific interviewers who require more supervision and evaluation, researchers and survey practitioners have started using data-driven procedures. These approaches rely on computer-administered interviews and other technological advancements, where real-time questionnaire data and paradata (or process data) (Couper, 1998; Kreuter, 2013) are analyzed and quality indicators are identified and displayed. These indicators are then compiled at the interviewer level to identify interviewers who exhibit outlying behavior and who require further follow-up and more targeted quality control interventions. The purpose of such data-driven approaches is to create an efficient process where quality control resources are channeled to potentially troublesome cases and to those interviewers who could be contributing the majority of the interviewer error.
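A minimal sketch of that interviewer-level compilation and flagging step, assuming quality indicators such as interview duration and an item-nonresponse rate have already been derived from the questionnaire data and paradata; the column names and the z-score threshold are illustrative, not taken from any of the case studies.

    import pandas as pd

    # One row per completed interview, with indicator columns already derived.
    interviews = pd.read_csv("interviews.csv")  # interviewer_id, duration_min, item_nr_rate

    # Compile quality indicators at the interviewer level.
    by_int = interviews.groupby("interviewer_id").agg(
        n=("duration_min", "size"),
        mean_duration=("duration_min", "mean"),
        mean_item_nr=("item_nr_rate", "mean"),
    )

    # Flag interviewers whose indicators lie far from the study-wide mean.
    for col in ["mean_duration", "mean_item_nr"]:
        z = (by_int[col] - by_int[col].mean()) / by_int[col].std()
        by_int[col + "_flag"] = z.abs() > 2  # illustrative threshold

    # Candidates for targeted verification, re-contact or field observation.
    print(by_int[by_int.filter(like="_flag").any(axis=1)])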

This presentation focuses on a number of case studies that have applied such data-driven approaches using technological innovations or modeling techniques. These case studies illustrate potential methods that could be implemented across a variety of survey designs and cultural contexts to fit the user's needs. They also demonstrate how technological advances applied during data collection can be used to generate real-time data for quality control interventions. The case studies include the Consumer Pyramids Survey and the Television Audience Measurement in India, the Saudi National Mental Health Survey in the Kingdom of Saudi Arabia, the European Social Survey and the Survey of Health, Ageing and Retirement in Europe, and the Health and Retirement Study in the US.


5. Improving data quality by optimizing fieldwork management
Mr May Doušak (junior researcher, member of the national ESS team: fieldwork, NC)

The transition from paper-and-pencil (PAPI) to computer-assisted (CAPI) questionnaires brought many possibilities with regard to final data quality. Among other improvements, computerisation changed the locus of control from the interviewer to the computer, allowed for validity checks and shortened the time needed to transfer data from the field to the researchers. For an agency to fully capitalize on those possibilities, its fieldwork department needs new approaches.
First, the type of device must be chosen: some agencies use computers while others use tablets; some use small, portable computers (10-12 inch) while others use bigger, more powerful ones. Senior interviewers' experience and opinions should also play a vital role in device selection, as they are the final users. Some agencies even allow (or encourage) respondents to use their own devices. Which operating system should be used, and how should the settings and user profiles be adjusted?
After the devices are chosen and set up, the real work begins. Data encryption, data backups, internet connection and data synchronization need to be set up. The agency must be able to remotely distribute the work (manage the fieldwork) as well as monitor progress on a daily basis. Management software should also be able to read the survey software's data and produce progress charts that ease the monitoring work.
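As an illustration of the kind of progress chart such management software might produce (the file layout and column names are hypothetical, not those of the author's system):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Case-level export from the survey software; columns are illustrative:
    # case_id, interviewer_id, date, outcome ("complete", "refusal", ...)
    cases = pd.read_csv("case_outcomes.csv", parse_dates=["date"])

    # Daily completed interviews, cumulated into a simple progress chart.
    completes = cases[cases["outcome"] == "complete"]
    daily = completes.groupby(completes["date"].dt.date).size()
    daily.cumsum().plot(title="Cumulative completed interviews")
    plt.xlabel("Fieldwork day")
    plt.ylabel("Completes")
    plt.savefig("progress.png")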
These were just a few of the questions we needed to answer while introducing, and then improving, our computerised fieldwork process. Over the last few years we have also been developing and refining custom software solutions for monitoring, backup, synchronization and fieldwork management.
Experience has shown that users should have as few system privileges as possible. Computers should all be of the same make and model and have an embedded 3G modem. If possible, the internet connection should be enabled continuously, and synchronization should be continuous and automatic so as not to place an additional burden on the interviewers. All revisions of data files should be kept on the server, while the laptops should save incremental backups in case the internet connection fails and something bad happens to the survey data.
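A schematic sketch of that sync-with-local-fallback rule; upload_to_server is a hypothetical placeholder for the author's custom transport, and the paths and interval are illustrative.

    import shutil
    import time
    from datetime import datetime
    from pathlib import Path

    DATA = Path("survey_data.db")    # interviewer's local survey data file
    LOCAL_BACKUPS = Path("backups")  # incremental copies if the uplink fails

    def upload_to_server(path):
        # Hypothetical placeholder: push the current revision to the server,
        # which keeps every revision rather than overwriting the last one.
        raise ConnectionError("no 3G link")  # stand-in for a real transfer

    def sync_loop(interval_s=300):
        LOCAL_BACKUPS.mkdir(exist_ok=True)
        while True:
            try:
                upload_to_server(DATA)  # normal path: continuous, automatic sync
            except ConnectionError:
                # Connection failed: keep a timestamped incremental copy
                # locally so no revision is lost before the link returns.
                stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
                shutil.copy2(DATA, LOCAL_BACKUPS / (DATA.stem + "_" + stamp + DATA.suffix))
            time.sleep(interval_s)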
In the presentation, we will discuss the key issues of computerisation and our solutions to them. We will also demonstrate the software solutions we use for fieldwork monitoring, sample distribution, synchronization and backup.