Tuesday 16th July 2013, 11:00 - 12:30, Room: No. 14

Using Paradata to Improve Survey Data Quality 1

Convenor: Dr Oliver Lipps (FORS, Lausanne)
Coordinator 1: Professor Volker Stocké (University of Kassel)
Coordinator 2: Professor Annelies Blom (University of Mannheim)

Session Details

"Paradata" are measures of the survey data collection process, such as data describing interviewer or respondent behaviour or data available from the sampling frame, such as administrative records. Examples of paradata are call-record data in CATI surveys, keystroke information from CAI, timestamp files, observations of interviewer behaviour or respondents' response latencies. These data can be used to enrich questionnaire responses or to provide information about the survey (non-)participation process. In many cases paradata are available at little additional cost. However, there is a lack of theoretically guided reasoning about how to use available paradata indicators to assess and improve the quality of survey data. Areas which might benefit from the effective use of paradata are:

- Paradata in fieldwork monitoring and nonresponse research: Survey practitioners can, for example, monitor fieldwork progress and interviewer performance (Japec 2005, Laflamme et al. 2008). Paradata are also indispensable in responsive designs, providing real-time information about fieldwork and survey outcomes which affect costs and errors (Groves and Heeringa 2006). In methodological research into interviewer effects (Lipps 2008, Blom et al. 2011), fieldwork effects (Lipps 2009), and consistent predictors of nonresponse and nonresponse bias (Blom et al. 2010), the jury is still out on the added value of paradata.

- Paradata to understand respondent behavior: Paradata might aid in assessing the quality of survey responses, e.g. by means of response latencies (Callegaro et al. 2009, Stocké 2004) or back-tracking (Stieger and Reips 2010). Research has used paradata to identify uncertainty in the answers given by respondents, e.g., if respondents frequently alter their answers, need a lot of time, or move the cursor over several answer options; the sketch below illustrates this kind of item-level flagging.
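
A minimal sketch of how such item-level paradata might be screened, assuming one record per item with a latency and an answer-change count; all field names and thresholds are invented for illustration:

```python
# Hypothetical sketch: screening item-level paradata for signs of response
# uncertainty. All field names and thresholds are invented for illustration.
import statistics

# One record per item response: latency in milliseconds and the number of
# times the respondent changed their answer before submitting.
responses = [
    {"respondent": "r1", "item": "q1", "latency_ms": 850,   "answer_changes": 0},
    {"respondent": "r1", "item": "q2", "latency_ms": 9400,  "answer_changes": 2},
    {"respondent": "r2", "item": "q1", "latency_ms": 1200,  "answer_changes": 0},
    {"respondent": "r2", "item": "q2", "latency_ms": 15300, "answer_changes": 3},
]

latencies = [r["latency_ms"] for r in responses]
mean_lat = statistics.mean(latencies)
sd_lat = statistics.stdev(latencies)

for r in responses:
    slow = r["latency_ms"] > mean_lat + sd_lat  # unusually long deliberation
    edited = r["answer_changes"] >= 2           # answer altered repeatedly
    if slow or edited:
        print(f"{r['respondent']}/{r['item']}: possible uncertainty "
              f"(slow={slow}, edited={edited})")
```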

Papers in this session consider all aspects of measuring, preparing, and analyzing paradata for data quality improvement in longitudinal as well as cross-sectional surveys.


Paper Details

1. Comparison of quality of web survey and CATI data using unobtrusive response latencies

Dr Jochen Mayerl (University of Stuttgart, Institute for Social Sciences)

The quality of survey measurements is a central topic of survey research. When interpreting survey data, survey researchers always have to deal with potentially biasing response effects, such as acquiescence or question order effects, i.e. with problems of validity and reliability. Another possible source of survey bias is the use of different survey modes, which can introduce different kinds or sources of biasing factors in response behavior.
As a special type of paradata, the unobtrusive measurement of response latencies has the potential to give researchers insights into the cognitive processes that are activated while respondents answer survey questions (e.g. analysis of the accessibility of mental associations or the depth of information processing).
In this paper, the quality of attitude scales (in terms of reliability, validity, and the strength of response effects) is analyzed in a comparative approach, using data from web surveys and a CATI study with the same attitude scales. These attitude scales consist of negatively and positively worded items, allowing the analysis of acquiescence effects, which are expected to be stronger when respondents answer spontaneously, i.e. when response latencies are fast.
Estimating multiple-group and interaction structural equation models, the paper helps to answer (1) whether web survey and CATI data are equally biased by response effects, and (2) whether respondents' behavior is comparable across the two survey modes. Additionally, from a methodological perspective, it is possible to (3) compare the power of response latency as a moderator of data quality in web versus telephone surveys.
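
A toy illustration of the hypothesized moderation (not the authors' structural equation models): a median split on response latency combined with a simple acquiescence index computed without reverse-coding the negatively worded items. All data and variable names are invented:

```python
# Toy illustration (not the authors' SEM): median split on response latency
# plus a simple acquiescence index. An acquiescent respondent agrees with
# both positively and negatively worded items, so averaging raw scores
# WITHOUT reverse-coding the negative items pushes the index above the
# scale midpoint. All data and variable names are invented.
import statistics

MIDPOINT = 3  # midpoint of a 1-5 agreement scale

respondents = [
    {"pos_items": [4, 5], "neg_items": [4, 4], "latency_ms": 900},   # fast, acquiescent
    {"pos_items": [4, 4], "neg_items": [2, 1], "latency_ms": 4200},  # slow, consistent
    {"pos_items": [5, 4], "neg_items": [5, 4], "latency_ms": 1100},
    {"pos_items": [3, 4], "neg_items": [2, 2], "latency_ms": 5100},
]

median_lat = statistics.median(r["latency_ms"] for r in respondents)

for label, fast in (("fast (spontaneous)", True), ("slow (deliberate)", False)):
    group = [r for r in respondents if (r["latency_ms"] < median_lat) == fast]
    index = statistics.mean(
        statistics.mean(r["pos_items"] + r["neg_items"]) for r in group
    )
    print(f"{label}: acquiescence index = {index:.2f} (scale midpoint {MIDPOINT})")
```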


2. Identifying Satisficing Respondents in Web Surveys: A Comparison of Different Response Time-Based Approaches

Mr Joss Rossmann (GESIS - Leibniz Institute for the Social Sciences)

Satisficing response behavior is a widely recognized hazard in Web surveys because interview supervision is limited in the absence of a human interviewer. It is therefore important to devise methods that help to identify satisficing. Some authors have recently proposed using response latencies as a measure of how much cognitive effort respondents devote to answering survey questions (e.g. Callegaro et al. 2004). Following this line of reasoning, exceptionally short response latencies are conceived as an indication of low cognitive effort, i.e. satisficing, while longer response times indicate more careful cognitive processing. Based on these considerations, this paper discusses several approaches to identifying satisficing respondents that make use of the interview duration and response latencies. These paradata have the advantage that they can be used as unobtrusive and direct measures of the depth of cognitive processing. Using data from a cross-sectional Web survey with respondents from a non-probability online panel, indicators are constructed for response behaviors commonly assumed to result from satisficing, e.g. non-differentiation in matrix questions or frequently choosing the DK option. Analyses then examine whether the response time-based approaches are suited to identifying satisficing respondents, and the results are compared in order to assess which approach performs best. The paper concludes with a discussion and critical reflection on using response time-based approaches to identify satisficing, and points out further research desiderata.
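
A hedged sketch of how such response time-based indicators might be combined; the cut-offs, variable names, and data are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of combining response time-based satisficing indicators.
# Cut-offs, variable names, and data are illustrative assumptions only.
import statistics

respondents = {
    # Per-item response times (seconds), answers to a 5-item matrix question,
    # and how often the DK ("don't know") option was chosen overall.
    "r1": {"times": [1.1, 0.9, 1.0, 1.2, 0.8], "matrix": [3, 3, 3, 3, 3], "dk_count": 4},
    "r2": {"times": [6.4, 5.1, 7.8, 4.9, 6.0], "matrix": [2, 4, 1, 5, 3], "dk_count": 0},
    "r3": {"times": [2.0, 1.5, 9.1, 3.2, 2.8], "matrix": [4, 4, 3, 5, 4], "dk_count": 1},
}

N_ITEMS = 30  # total items offering a DK option (assumed)

for rid, d in respondents.items():
    speeding = statistics.median(d["times"]) < 2.0   # exceptionally short latencies
    nondiff = statistics.stdev(d["matrix"]) < 0.5    # straight-lining in the matrix
    dk_heavy = d["dk_count"] / N_ITEMS > 0.10        # frequent DK choice
    score = sum([speeding, nondiff, dk_heavy])
    print(f"{rid}: speeding={speeding}, non-differentiation={nondiff}, "
          f"DK-heavy={dk_heavy} -> satisficing score {score}/3")
```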


3. The use of Behavior Coding to Analyze Data Quality in the SOEP Establishment Survey 2012

Ms Alexia Meyermann (Bielefeld University)
Mr Michael Weinhardt (DIW Berlin)
Professor Stefan Liebig (Bielefeld University)
Professor Jürgen Schupp (DIW Berlin)

In 2012 a representative establishment survey of German employers was conducted (N=1,600) using face-to-face interviews. Establishments were sampled based on address information given by employed participants in the Socio-Economic Panel Study (SOEP), and information from both surveys can be linked to create a linked employer-employee data set on organizational strategies and labour market outcomes. Paradata were collected at several stages of the survey: in addition to field reports, an interviewer survey was conducted, every interview situation was evaluated separately by interviewers, the editing process was reassessed, and around 30 interviews were audiotaped to gain insight into the interviewing process. In our analysis these paradata were used to identify potential threats to data quality on the side of the respondent, the interviewer, and the questionnaire items. The audio recordings were analyzed by applying Behavior Coding, whereby all occurring behaviors of interviewers and respondents at the utterance level are fully coded. The codes used for questionnaire items are loosely based on the SQP codes created by Saris and Gallhofer (2007). In our paper we present the results of our analysis and the data quality problems that were identified. As our methodological approach has not previously been applied to establishment survey data, we will first address the features that are unique to establishment surveys. Second, we discuss the value of audio recordings and the Behavior Coding method for the detection of data quality issues.
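
A minimal sketch of the tallying step that typically follows behavior coding, assuming utterances have already been assigned codes; the code labels and data here are invented, not the SQP-based scheme used by the authors:

```python
# Minimal sketch of tallying behavior codes after coding is complete. The
# code labels and data are invented, not the authors' SQP-based scheme.
from collections import Counter

# Each coded utterance: (questionnaire item, actor, behavior code).
coded_utterances = [
    ("q1", "interviewer", "exact_reading"),
    ("q1", "respondent",  "adequate_answer"),
    ("q2", "interviewer", "major_wording_change"),
    ("q2", "respondent",  "request_clarification"),
    ("q2", "respondent",  "adequate_answer"),
]

# Count problematic behaviors per item to spot troublesome questions.
PROBLEM_CODES = {"major_wording_change", "request_clarification", "inadequate_answer"}
problems = Counter(item for item, _, code in coded_utterances if code in PROBLEM_CODES)
for item, n in problems.most_common():
    print(f"{item}: {n} problematic utterance(s)")
```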



4. Using Sequence Analysis to Better Understand Interviewer Calling Behaviours: An Example from the UK Understanding Society Survey

Dr Gabriele Durrant (University of Southampton)
Dr Olga Maslovskaya
Professor Peter Smith
Mrs Julia D’Arrigo

For interviewer-mediated surveys, researchers have become increasingly interested in how best to use the information collected at each call to a sampling unit. To date, however, analysis of interviewer calling behaviour remains limited. Most previous studies have focussed on the final outcome of a response process, e.g. a final refusal, rather than the process leading to it. Where contact sequences have been considered, often only summary measures from call sequences have been used in response propensity models, rather than investigating the contact sequence as a whole. Much of the prior research has focused on the average best times of day and days of the week to establish contact, without controlling for household characteristics and prior call information.

This paper explores the use of sequence analysis to better understand the complex patterns of interviewer calls to housing units. The method, first introduced by Kreuter and Kohler (2009) in the context of nonresponse adjustment, offers an intuitive way of displaying the sequence of calls and allows the identification of groups with similar sequences based on a distance matrix (a minimal sketch of this step follows the abstract). The paper develops the use of sequence analysis across calls and separately for each interviewer. Further, the paper explores the use of sequence analysis to inform the modelling of complex call patterns, such as the identification of the characteristics of early and late responders and non-responders. Our modelling strategy makes use of multilevel analysis, taking into account the clustering of sampling units within interviewers.

Previous research has focussed primarily on cross-sectional surveys. However, a better understanding of call patterns may be most beneficial for longitudinal surveys where information from the current and previous calls as well as a wealth of information for selected housing units are available. The call record data analysed here comes from a new large-scale longitudinal survey in the UK, Understanding Society.

The findings from this research will inform survey practitioners about interviewer calling behaviours and response processes overall and within subgroups of both interviewers and respondents.
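
A minimal sketch of the distance-matrix step described above: each case's call history is encoded as a string of outcome codes and pairwise edit distances are computed, as a simplified stand-in for the optimal-matching distances typically used in sequence analysis. All codes and data are invented:

```python
# Minimal sketch of the distance-matrix step: encode each case's call
# history as a string of outcome codes and compute pairwise edit distances,
# a simplified stand-in for optimal-matching distances. Codes and data are
# invented for illustration.
from functools import lru_cache

# N = noncontact, C = contact (no interview), A = appointment,
# I = interview, R = refusal
call_sequences = {
    "hh1": "NNCI",   # two noncontacts, then contact, then interview
    "hh2": "NCAI",
    "hh3": "NNNNR",  # repeated noncontacts ending in refusal
    "hh4": "NNNR",
}

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two call-outcome sequences."""
    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:
        if i == 0:
            return j
        if j == 0:
            return i
        cost = 0 if a[i - 1] == b[j - 1] else 1
        return min(d(i - 1, j) + 1,         # deletion
                   d(i, j - 1) + 1,         # insertion
                   d(i - 1, j - 1) + cost)  # substitution
    return d(len(a), len(b))

ids = list(call_sequences)
for x in ids:
    print(x, [edit_distance(call_sequences[x], call_sequences[y]) for y in ids])
# The resulting matrix could feed a clustering routine to group cases with
# similar calling histories (e.g. early responders vs. hard-to-contact cases).
```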