ESRA 2013 Sessions

Explaining Interviewer Effects in Interviewer-Mediated Surveys 1 Professor Annelies Blom
Researchers are invited to submit paper proposals for the session "Explaining Interviewer Effects in Interviewer-Mediated Surveys" at the European Survey Research Association conference, July 15-19, 2013, in Ljubljana. In interviewer-mediated surveys, interviewers naturally have great potential to affect data quality. Interviewers compile sampling frames; they establish contact with and gain the cooperation of sample units; and they act as mediators between the researcher's questions and the respondent's answers. Their characteristics, attitudes, experience and abilities can affect all stages of the data collection process, and interviewer effects may occur at each of them. As such, interviewers are at once invaluable and a source of error.

Therefore, the selection of good interviewers and appropriate training are essential for high-quality surveys. However, little is still known about what constitutes a good interviewer and good training. Understanding the mechanisms behind interviewer effects requires information about the interviewers. There are three potential sources of such information: first, the actual interview data and the information contained therein about interviewer clustering; second, paradata automatically collected during the data collection process, which may include information about how the data were collected (e.g. call record data) as well as information on the interview itself (e.g. response times or audio trails); third, a survey administered to the deployed interviewers, which may collect data about relevant interviewer characteristics such as experiences, attitudes, expectations, and general demographics.

This session will focus on research into explaining interviewer effects on various aspects of a survey using one or more sources of information about the interviewer.
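A common starting point for such analyses is the interviewer intraclass correlation, which measures how strongly answers cluster within interviewers. The following is only a minimal sketch on simulated, balanced data using a one-way ANOVA estimator; all numbers are invented and no particular study's method is implied.

```python
import random
import statistics

# Simulate answers clustered by interviewer, then estimate the interviewer
# intraclass correlation (ICC) with a balanced one-way ANOVA estimator.
random.seed(42)

N_INTERVIEWERS = 20
N_PER_INTERVIEWER = 25
TRUE_SD_INTERVIEWER = 0.5   # between-interviewer variation
TRUE_SD_RESIDUAL = 1.0      # within-interviewer (respondent) variation

data = []  # one list of answers per interviewer
for _ in range(N_INTERVIEWERS):
    effect = random.gauss(0.0, TRUE_SD_INTERVIEWER)
    data.append([effect + random.gauss(0.0, TRUE_SD_RESIDUAL)
                 for _ in range(N_PER_INTERVIEWER)])

k = N_INTERVIEWERS
m = N_PER_INTERVIEWER
grand_mean = statistics.mean(y for g in data for y in g)

# Between- and within-interviewer mean squares
ms_between = sum(m * (statistics.mean(g) - grand_mean) ** 2 for g in data) / (k - 1)
ss_within = sum(sum((y - statistics.mean(g)) ** 2 for y in g) for g in data)
ms_within = ss_within / (k * m - k)

icc = (ms_between - ms_within) / (ms_between + (m - 1) * ms_within)
print(f"estimated interviewer ICC: {icc:.3f}")  # true ICC in this simulation is 0.2
```

In practice such clustering is usually modelled with multilevel (random-intercept) models rather than ANOVA, but the quantity being estimated is the same.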


Explaining Interviewer Effects in Interviewer-Mediated Surveys 2 Professor Annelies Blom


Fieldwork in interview surveys - professional guidelines and field observations Mr Wojciech Jablonski
This session invites presentations dealing with different aspects of fieldwork in interview surveys, both in person (PAPI/CAPI) and over the telephone (CATI). In particular, we are interested in two issues. On the one hand, we will focus on the fieldwork procedures, guidelines, sets of rules, etc. implemented in order to keep the research process standardized and to achieve high-quality survey data. On the other hand, we will investigate the problem of compliance with these principles during fieldwork.
Topics that might come under this theme include (but are not limited to):
- innovative methods of interviewer training (general, project-specific, and refresher);
- procedures of monitoring and evaluating interviewers' job performance (in particular, detecting deviations from the standardized protocol);
- analysis of interviewers' behaviour during survey introduction and while asking questions / recording answers;
- interviewers' attitudes toward their job (specifically the difficulties they encounter while administering the survey, and the solutions they implement in order to overcome these problems).
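As one illustration of performance monitoring, interview durations recorded as paradata are often screened for interviewers who work implausibly fast, a first pass at detecting deviations from the standardized protocol. The durations and the threshold below are invented for the sketch:

```python
import statistics

# Flag interviewers whose median interview length is unusually short
# relative to the field-wide median. Durations are hypothetical minutes.
durations_by_interviewer = {
    "int_01": [42, 38, 45, 40, 44],
    "int_02": [39, 41, 37, 43, 40],
    "int_03": [12, 15, 11, 14, 13],   # suspiciously fast
    "int_04": [44, 40, 42, 39, 41],
}

medians = {iid: statistics.median(d) for iid, d in durations_by_interviewer.items()}
field_median = statistics.median(medians.values())

# Flag anyone below half the field-wide median (the cutoff is a judgment call)
flagged = sorted(iid for iid, med in medians.items() if med < 0.5 * field_median)
print(flagged)  # -> ['int_03']
```

A flag like this only triggers a follow-up (e.g. audio review or re-contact); it is not evidence of falsification on its own.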

Fieldwork in Interview Surveys - Professional Guidelines and Field Observations Professor Ivan Rimac


Investigating nonrespondents: how to get reliable data and how to use them Mrs Michele Ernst Staehli
Since response rates are not a sufficient indicator of data quality, as they are not directly linked to nonresponse bias, survey researchers have increased their efforts to obtain reliable data about nonrespondents. The purpose is to detect, quantify and, ideally, adjust for nonresponse bias. But collecting such data presents several challenges. Besides problems of accessibility, the cost of collection, and the burden on interviewers as well as on reluctant or refusing sample units, the quality of the data itself is not easy to achieve. Either there are only very basic data on the whole sampling frame, which mostly do not explain much of the nonresponse bias; or the data are based on rough evaluations (e.g. by interviewers) or on very short questions, raising problems of data quality; or they are collected in separate surveys, raising problems of comparability with the data of respondents.
This session proposes to gather and discuss experiences about the collection of such data with, in particular, the perspective of optimizing nonresponse follow-up surveys.
The main questions we want to address are:
- which kinds of data are most useful (type of collection)
- which variables/items best detect nonresponse bias (content of information)
- which is or are the best way(s) to collect reliable data
- how can the quality of such data be assured and improved and
- finally, how can these data best be analyzed?

We especially welcome papers that compare
- different sources of data (e.g. paradata, observable data, data from nonrespondent follow-ups or surveys)
- different types of information about the nonrespondents (e.g. contextual, socio-demographic, attitudinal)
- different designs for surveying nonrespondents
- different methods of analysis and bias adjustment.
We also look forward to discovering unconventional items and innovative designs and methods.
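As a toy illustration of the first comparison: when a variable is known for the whole sampling frame, nonresponse bias on that variable can be estimated directly by comparing respondents with the full frame. The data and the response propensity model below are entirely invented:

```python
import random
import statistics

# Simulate a frame variable (age) and a toy response mechanism in which
# older people respond more often, then measure the resulting bias.
random.seed(7)

frame = [random.gauss(45, 15) for _ in range(5000)]   # age known for the whole frame
respondents = [age for age in frame
               if random.random() < min(0.9, 0.3 + 0.005 * age)]

bias = statistics.mean(respondents) - statistics.mean(frame)
rr = len(respondents) / len(frame)
print(f"response rate: {rr:.2f}, bias in mean age: {bias:+.2f} years")
```

The catch discussed above applies: such basic frame variables are cheap to compare but often explain little of the bias in the substantive survey variables.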


Methodological Aspects of PIAAC, the Programme for the International Assessment of Adult Competencies Professor Beatrice Rammstedt
The Programme for the International Assessment of Adult Competencies (PIAAC) is a newly launched international comparative study governed by the OECD. PIAAC investigates major competencies of adults aged 16 to 65: Literacy, Numeracy, and Problem Solving in Technology-Rich Environments. To allow an in-depth analysis of persons with very low literacy scores, an additional module on Reading Components is administered to this group.
PIAAC is innovative in several ways. First, it is the first study to assess the competence domain Problem Solving in Technology-Rich Environments. Second, a module in the Background Questionnaire assessing the individual use of competencies at work and in daily life allows comparisons between the actual skill level and its usage. Finally, from a methodological point of view, PIAAC is the first large-scale educational study to be administered entirely on computer.
PIAAC is currently being conducted in parallel in 25 countries around the world. Data collection, based on large, random, population-representative samples, took place in 2011 and 2012. First results, e.g. on how countries perform in the different competence domains, will be published in October 2013. At the same time, the public use file containing the data of the 25 countries will be released.
The session will present the design of the study and the scientific value of the resulting database. In particular, alongside the overall design of the study, we will present the methodological challenges PIAAC is facing. In addition, we will describe the overall implementation of the study's standards and guidelines and will focus on challenges in specific countries, given each country's constraints with regard to fieldwork, sampling, etc. Finally, we will outline the analyses that the PIAAC data set allows.
We welcome submissions of presentations on these topics.


The evaluation of interviewer effects on different components of the total survey error Professor Geert Loosveldt
Although there is a long tradition of evaluating interviewer effects in face-to-face survey interviews, some recent papers show a renewed interest in the subject. The reported results also make clear that interviewer effects are still a relevant topic in survey methodology. Characteristic of these publications is that they try to link interviewer effects with other components of the 'Total Survey Error' framework [e.g. the correlation between interviewer-induced nonresponse bias and measurement error (Brunton-Smith et al., 2012); the role of interviewer experience in acquiescence (Olson et al., 2011); the relationship between nonresponse variance and interviewer variance (West et al., 2010)]. In this session about interviewer effects we want to continue this approach, which evaluates the impact of interviewers on different types of selection and measurement errors (e.g. unit and item nonresponse, the amount of information, socially desirable answers and other response tendencies).
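One simple way to explore such links empirically is to compute, per interviewer, both a nonresponse-related indicator and a measurement-related indicator and then correlate them across interviewers. The sketch below uses simulated interviewers with a shared latent trait driving both outcomes; it is an invented illustration, not a reconstruction of any of the cited analyses.

```python
import random

# Simulate interviewers whose latent "persuasiveness" drives both their
# cooperation rate and the answers they obtain, then correlate the two
# indicators across interviewers.
random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

coop_rates, mean_answers = [], []
for _ in range(30):                       # 30 interviewers
    skill = random.gauss(0, 1)            # latent trait
    coop = 0.6 + 0.1 * skill              # drives cooperation...
    answers = [random.gauss(5 + 0.3 * skill, 1) for _ in range(40)]  # ...and answers
    coop_rates.append(coop)
    mean_answers.append(sum(answers) / len(answers))

r = pearson(coop_rates, mean_answers)
print(f"correlation across interviewers: {r:.2f}")
```

A nonzero correlation of this kind is what makes the joint treatment of nonresponse and measurement error components worthwhile.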


Use of Paradata for Production Survey Management Dr Katherine McGonagle
This session addresses the rise in demand for tools that capitalize on the increasing availability of paradata. Managing surveys efficiently and continuing to collect high-quality data amidst declining response rates have further increased the need for rapidly assessing survey instrument performance. This need has led to many innovative approaches for managing surveys in the field, and has given rise to new tools. From cost and response evaluations that facilitate responsive design, to interviewer training evaluations and performance management, to near real-time evaluations of data quality and estimates, the tools available for performing these tasks are expanding. Paradata encompass a broad spectrum of realized and potential data sources, and the methods for presenting these data for use in survey management are a significant and growing area of interest. This session seeks to explore current and developing areas of paradata use and dissemination for survey management. The session will highlight maximizing the use and usability of paradata in production survey management, provide a forum for discussion of recommendations, and identify gaps between concepts and operationalization. Papers are invited which consider these topics from a variety of perspectives and may include (but are not limited to) the following topics:
- Paradata capture and presentation
- Dashboards
- Tracking systems
- Performance management
- Responsive design systems and tools
- Data quality
- Data estimates
All papers are expected to develop general themes from their experiences, rather than focus on issues solely relevant to their own projects. Especially welcome are joint papers that explore the issues from two (or more) sides of a collaboration, as well as those that bridge traditional boundaries in survey management.
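As a minimal illustration of the aggregation behind such dashboards and tracking systems, call-record paradata can be reduced to daily contact and cooperation rates. The outcome codes below are invented placeholders, not any standard disposition scheme:

```python
from collections import Counter

# Reduce call-record paradata to per-day fieldwork indicators of the kind
# a production dashboard would display. All records are hypothetical.
call_records = [
    ("2013-07-15", "contact"), ("2013-07-15", "no_answer"),
    ("2013-07-15", "interview"), ("2013-07-15", "refusal"),
    ("2013-07-16", "interview"), ("2013-07-16", "contact"),
    ("2013-07-16", "no_answer"), ("2013-07-16", "interview"),
]

def daily_kpis(records):
    """Contact rate and cooperation rate per fieldwork day."""
    kpis = {}
    for day in sorted({d for d, _ in records}):
        outcomes = Counter(out for d, out in records if d == day)
        total = sum(outcomes.values())
        contacts = total - outcomes["no_answer"]
        kpis[day] = {
            "contact_rate": contacts / total,
            "cooperation_rate": outcomes["interview"] / contacts if contacts else 0.0,
        }
    return kpis

for day, k in daily_kpis(call_records).items():
    print(day, f"contact {k['contact_rate']:.0%}", f"coop {k['cooperation_rate']:.0%}")
```

Production systems compute such indicators continuously and plot them against targets, which is where the dashboard and responsive-design tools discussed above come in.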


Use of Paradata for Production Survey Management Mr Kyle Fennell


Using Paradata to Improve Survey Data Quality Miss Anna Isenhardt


Using Paradata to Improve Survey Data Quality 1 Dr Oliver Lipps
"Paradata" are measures of the survey data collection process, such as data describing interviewer or respondent behaviour or data available from the sampling frame, such as administrative records. Examples of paradata are call-record data in CATI surveys, keystroke information from CAI, timestamp files, observations of interviewer behaviour or respondents' response latencies. These data can be used to enrich questionnaire responses or to provide information about the survey (non-)participation process. In many cases paradata are available at little additional cost. However, there is a lack of theoretically guided reasoning about how to use available paradata indicators to assess and improve the quality of survey data. Areas which might benefit from the effective use of paradata are:

- Paradata in fieldwork monitoring and nonresponse research: Survey practitioners can, for example, monitor fieldwork progress and interviewer performance (Japec 2005, Laflamme et al. 2008). Paradata are also indispensable in responsive designs, as they provide real-time information about fieldwork and survey outcomes, which affect costs and errors (Groves and Heeringa 2006). In methodological research into interviewer effects (Lipps 2008, Blom et al. 2011), fieldwork effects (Lipps 2009), and consistent predictors of nonresponse and nonresponse bias (Blom et al. 2010), the jury is still out on the added value of paradata.

- Paradata to understand respondent behaviour: Paradata might aid in assessing the quality of survey responses, e.g. by means of response latencies (Callegaro et al. 2009, Stocké 2004) or back-tracking (Stieger and Reips 2010). Research has used paradata to identify uncertainty in respondents' answers, e.g. when respondents frequently alter their answers, need a lot of time, or move the cursor over several answer options.

Papers in this session consider all aspects of measuring, preparing and analyzing paradata for data quality improvement in longitudinal as well as cross-sectional surveys.
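To illustrate the response-latency idea, a simple robust screen flags item latencies far from the median, catching both "speeders" and answers that took conspicuously long. The latencies and the cutoff below are invented for the sketch:

```python
import statistics

# Flag item-level response latencies (seconds) that are far from the median,
# using the median absolute deviation (MAD) as a robust spread measure.
latencies = [2.1, 3.4, 2.8, 3.0, 2.5, 14.2, 2.9, 3.1, 0.3, 2.7]

med = statistics.median(latencies)
mad = statistics.median(abs(x - med) for x in latencies)

def flag(latency, k=5.0):
    """Flag latencies more than k robust deviations from the median."""
    return abs(latency - med) > k * mad

flagged = [x for x in latencies if flag(x)]
print(flagged)  # the very slow answer (14.2 s) and the speeder (0.3 s)
```

Flags of this kind identify answers worth inspecting; whether a fast or slow answer actually signals uncertainty is itself a research question for this session.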

Using Paradata to Improve Survey Data Quality 2 Dr Oliver Lipps
"Paradata" are measures of the survey data collection process, such as data describing interviewer or respondent behaviour or data available from the sampling frame, such as administrative records. Examples of paradata are call-record data in CATI surveys, keystroke information from CAI, timestamp files, observations of interviewer behaviour or respondents' response latencies. These data can be used to enrich questionnaire responses or to provide information about the survey (non-)participation process. In many cases paradata are available at little additional cost. However, there is a lack of theoretically guided reasoning about how to use available paradata indicators to assess and improve the quality of survey data. Areas which might benefit from the effective use of paradata are:

- Paradata in fieldwork monitoring and nonresponse research: Survey practitioners can for example monitor fieldwork progress and interviewer performance (Japec 2005, Laflamme et al. 2008). They are also indispensable in responsive designs as real-time information about fieldwork and survey outcomes which affect costs and errors (Groves and Heeringa 2006). In methodological research into interviewer (Lipps 2008, Blom et al. 2011) or fieldwork (Lipps 2009) effects, consistent predictors of nonresponse and nonresponse bias (Blom et al. 2010), the jury is still out on the added value of paradata.

- Paradata to understand respondent behavior: Paradata might aid assessing of the quality of survey responses, e.g. by means of response latencies (Callegaro et al. 2009, Stocké 2004) or back-tracking (Stieger and Reips 2010). Research has used paradata to identify uncertainty in the answers given by respondents, e.g., if respondents frequently alter their answers, need a lot of time, or move the cursor over several answer options.

Papers in this session consider all aspects of measuring, preparing and analyzing paradata for data quality improvement in longitudinal as well as cross sectional surveys.