Tuesday 18th July, 11:00 - 12:30 Room: Q4 ANF2


Question Pretesting and Evaluation: Challenges and Current Practices 1

Chair: Dr Cornelia Neuert (GESIS – Leibniz Institute for the Social Sciences)
Coordinator 1: Dr Ellen Ebralidze (LIfBi – Leibniz Institute for Educational Trajectories)
Coordinator 2: Kerstin Hoenig (LIfBi – Leibniz Institute for Educational Trajectories)

Session Details

Prior to data collection, survey questions are typically tested and evaluated in some form of pretesting. Researchers and survey methodologists have a broad and continuously growing set of methods at their disposal; however, there is relatively little empirical evidence on the comparative effectiveness of different pretesting methods. The practices and in-house styles currently used by institutes such as the GESIS Pretest Lab and by large-scale surveys such as the German National Educational Panel Study (NEPS) are just as manifold as the methods available for testing survey questions. A large set of procedures and approaches for planning, conducting and analyzing cognitive pretests exists, in particular with regard to study design, recruitment, sample design, protocol development, data collection and management, number and experience of interviewers, and analysis and reporting of findings. Each of these approaches has particular advantages, disadvantages, and costs. In addition to these methodological concerns, pretest projects often face practical challenges such as constraints on time, resources and staff, or target populations that are hard to reach.
The aim of this session is to discuss current practices and to share experiences of questionnaire testing and evaluation. We encourage researchers and practitioners in the field to present papers on how they undertake cognitive testing in their day-to-day work, addressing the following topics:

Sampling and recruitment
• Sample population
• Sample size
• Recruiting methods (participant pool, panels, snowballing, advertisements, recruitment agencies)
• Incentives

Conducting cognitive pretests
• Who is doing the testing? / Number and training of interviewers
• Development and (non)standardization of the interview protocol
• Use of methods outside the lab (online probing, virtual methods)
• Deployment of observers, recording or transcription of interviews
• Number of iterations

Analysis of cognitive interviews
• Analytical strategies, techniques used, use of formalized coding schemes (see the sketch following this list)
• Data management
• Documentation of results
• Analysis software
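
To make the notion of a formalized coding scheme concrete, here is a minimal Python sketch of the tallying step such a scheme enables; the problem codes, question identifiers and coded findings are hypothetical and purely illustrative, not drawn from any of the practices presented in this session.

    # Minimal sketch: tallying coded cognitive-interview findings per question.
    # All codes and data below are hypothetical examples.
    from collections import Counter

    # Hypothetical coding scheme: comprehension, retrieval, judgement, response.
    PROBLEM_CODES = {"COMP", "RETR", "JUDG", "RESP"}

    # Hypothetical coded findings: (interview id, question id, problem code).
    findings = [
        (1, "Q1", "COMP"),
        (1, "Q3", "RESP"),
        (2, "Q1", "COMP"),
        (2, "Q2", "RETR"),
        (3, "Q1", "JUDG"),
    ]

    # Count problem codes per question across all interviews.
    tally = Counter((q, code) for _, q, code in findings if code in PROBLEM_CODES)
    totals = Counter(q for _, q, _ in findings)

    # Report the most problematic questions first, with a per-code breakdown.
    for q, total in totals.most_common():
        breakdown = ", ".join(
            f"{code}: {n}" for (qq, code), n in sorted(tally.items()) if qq == q
        )
        print(f"{q}: {total} problem(s) ({breakdown})")

The same tally could equally be produced in dedicated analysis software; the point is that a formalized scheme reduces cognitive-interview findings to countable, comparable units.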

Evaluation of pretesting methods
• Advantages and disadvantages of competing modes, techniques and procedures
• Establishing standards for the evaluation of different methods
• Multi-method pretest approaches

Paper Details

1. Practices and Challenges of Cognitive Pretesting for the German National Educational Panel (NEPS)
Dr Ellen Ebralidze (LIfBi - Leibniz Institute for Educational Trajectories)
Mrs Kerstin Hoenig (LIfBi - Leibniz Institute for Educational Trajectories)

As one of Europe’s largest panel studies, the NEPS has been set up to describe and analyze the long-term development of educational careers in Germany. Study participants’ ages range from birth to retirement, and additional interviews are conducted with parents and teachers. The NEPS features CATI, CAPI, CAWI and PAPI questionnaires. Following central research paradigms and findings from sociological life-course research and life-span psychology, the NEPS distinguishes eight stages of education and concentrates on five theoretical dimensions (so-called “pillars”) relevant to the cumulative processes in educational careers. Each of these stages and pillars is represented by a research group within the NEPS’s interdisciplinary consortium, which contributes items for the various studies carried out every year. In the course of item development, cognitive pretesting is conducted by the individual research groups, while the project’s central coordination unit provides a participant pool that the item developers may use.
The paper describes the pretesting practices in NEPS Pillar 3 (“Social Inequality and Educational Decisions Across the Life Course”) and highlights the challenges that arise in pretesting within the context of a large-scale, multi-method panel study, from recruitment to interviewing, analysis, and the communication and documentation of results. Particular attention is paid to the general challenge of maintaining a participant pool for a panel survey.


2. Question Pretesting Practices in a German PhD Panel Study
Ms Susanne de Vogel (German Centre for Higher Education Research and Science Studies (DZHW))
Ms Gesche Brandt (German Centre for Higher Education Research and Science Studies (DZHW))
Mr Kolja Briedis (German Centre for Higher Education Research and Science Studies (DZHW))
Mr Steffen Jaksztat (German Centre for Higher Education Research and Science Studies (DZHW))

The German Centre for Higher Education Research and Science Studies (DZHW) set up a panel study to examine the learning conditions, career entry and career development of doctorate holders in Germany. To this end, we conducted a full-population survey of all those who successfully completed their PhD at a German higher education institution (HEI) in 2014. Altogether, about one fifth of the population took part in the initial wave (N = 5,411). The panel continues with further waves (online surveys) taking place every year.
The project mainly aims at providing a data set that enables subject-specific analyses and comparisons between different forms of doctoral study (for example, doctorates pursued within a research assistant position, a graduate school or a grant program). Moreover, it intends to collect detailed longitudinal data that allow, in particular, for event history analyses of the career paths and professional development of PhD holders.
To assure the quality of the survey instruments, cognitive pretesting is, alongside various quantitative testing approaches, an integral part of our questionnaire development process. Within the scope of our panel study, cognitive pretesting focuses on the following questions: 1. Are all respondents able to answer the questions, and do they understand them as intended by the researchers? 2. Are all respondents able to handle the online tools we use to track the stages of their careers? 3. Do respondents with different contextual backgrounds (that is, subject and form of doctoral study) understand the questions in the same way? 4. Are all important context-specific aspects covered in the questionnaire?
This paper presents the current practices of cognitive pretesting in our PhD panel study, addressing the recruitment of participants, the methods applied, the conduct of the testing, and the analysis and documentation of the results. Using practical examples, we illustrate the challenges we faced in our work as well as the benefits we see for our data quality.
Participants for the pretest were primarily recruited through the Leibniz University Graduate Academy in Hannover (the local university). To obtain a well-balanced sample with regard to subject, form of doctoral study and gender, additional participants were recruited through snowballing and internet research. Every participant received an incentive.
To simulate the interview situation of our main survey, participants completed our questionnaire online. After every question, the interviewer probed their understanding of the question and any problems that may have occurred while responding. Interviewers mainly followed a standardized interview protocol, using techniques such as paraphrasing, probing and confidence ratings. To enable a thorough analysis, the cognitive interviews were recorded anonymously both in writing and on tape.
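
As an illustration of what a standardized interview protocol of the kind described above might look like in structured form, the following Python sketch represents per-question probe sets combining a paraphrasing prompt, scripted probes and a confidence rating; the question identifiers and probe wordings are hypothetical, not taken from the DZHW instrument.

    # Minimal sketch of a standardized probe protocol (Python 3.9+).
    # Question ids and probe texts are hypothetical examples.
    from dataclasses import dataclass, field

    @dataclass
    class ProbeSet:
        question_id: str
        paraphrase: str = "Can you repeat this question in your own words?"
        probes: list[str] = field(default_factory=list)  # scripted follow-up probes
        confidence: str = "How confident are you in your answer? (1 = not at all, 5 = very)"

    # One probe set per questionnaire item; the interviewer administers it
    # immediately after the respondent answers that item online.
    protocol = [
        ProbeSet("Q1", probes=["What does 'doctoral position' mean to you here?"]),
        ProbeSet("Q2", probes=["How did you arrive at this date?"]),
    ]

    for p in protocol:
        print(p.question_id, "->", p.paraphrase, "|", len(p.probes), "probe(s)")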
The results of the cognitive testing were then compiled to derive implications for modifying our survey instruments. After a further quantitative pretest, the thoroughly tested and clearly improved questionnaire is used in the main survey.


3. Cognitive interviewing to test survey instruments to reduce measurement error – real life examples
Ms Karen Kellard (Australian National University)

The cognitive testing of survey instrumentation – through cognitive interviewing – is a primarily qualitative method that examines (question by question) people’s comprehension, retrieval, judgement and response to survey questions and response frames. The cognitive interview is generally carried out face-to-face with a small sample of respondents who broadly reflect the population of interest. Respondents are asked to ‘think aloud’ in their responses to allow the researcher to identify potential errors or confusion in the interpretation of the question itself, or in the response categories offered, and to ensure that the question’s intent is consistent with the answers provided.

This paper will present the current practice of the Social Research Centre (a research company owned by the Australian National University) in testing survey questions and instruments. Cognitive interviewing is conducted by specialist qualitative researchers (in collaboration with the quantitative team and questionnaire writers) at the Social Research Centre, and is carried out face-to-face with a relatively small sample. The approach adopted utilises the ‘think aloud’ technique, exploring comprehension, retrieval, judgement and response through a ‘Total Survey Error’ lens. Using real-life examples, the paper will reflect on the efficacy and effectiveness of the approach in an environment that is typically constrained by time and resources.

The paper will walk through the current practice of the team, covering both practical and theoretical aspects of the process, which has been refined and adapted along the way to work within a fast-paced survey environment that is typically characterised by constraints on time and resources, whilst striving to deliver the highest possible quality outputs. The paper will cover aspects such as:

• Recruiting respondents and determining a sample size and composition
• ‘Training’ the respondents on the cognitive interviewing process (and how it is different to what may be perceived as a ‘normal’ interview)
• Location of interviewing (including the feasibility of incorporating usability testing within the cognitive interviewing process)
• Skills and experience of the researchers conducting the cognitive interviews – how to ensure they have the appropriate skill mix required (qualitative interviewing skills, questionnaire design skills, rapport development and so forth)
• Development of discussion guides and data capture tools
• Recording of information (including the use of audio recorders, note-takers and observers)
• Coding and analysis of data from the interviewing process, including approaches to the organisation, coding (using NVivo and other techniques) and analysis of the data
• Reporting of findings in user-friendly, client-friendly documents with clearly articulated findings and recommendations.

This paper will reflect on the value of this cognitive interviewing approach (including its strengths and weaknesses) by providing ‘real-life’ examples of survey instruments (questions and associated materials) that have been cognitively tested at the Social Research Centre. Viewed through a ‘Total Survey Error’ lens, these examples will also show how measurement errors can arise through respondents’ misunderstanding or misinterpretation of the questions being asked, or of the response option choices that they are required to make.


4. US Statistical System Standards and Guidelines for Cognitive Interviewing Studies
Dr Kristen Miller (National Center for Health Statistics)
Dr Paul Scanlon (National Center for Health Statistics)

In Fall 2016, the United States Office of Management and Budget, in its role as coordinator of the Federal statistical system under the Paperwork Reduction Act, issued standards for cognitive interviewing studies as an addendum to its Statistical Policy Directive No. 2, Standards and Guidelines for Statistical Surveys. The addendum, Standards and Guidelines for Cognitive Interviews, is intended to ensure that the results of statistical surveys sponsored by the Federal Government are as reliable and useful as possible while minimizing respondent burden. This presentation will discuss the standards as well as the rationale and decision-making processes behind the criteria.

The Addendum provides seven standards for cognitive interviews conducted by, or on behalf of, the US Federal government for statistical purposes, including the evaluation of a survey, instrument, or data collection method. These standards pertain to the design, conduct, analysis and publication of cognitive interview studies. The seven standards are presented individually with accompanying guidelines that represent best practices in fulfilling the goals of the standard. The document is intended to provide guidance on the preferred methods for all agencies conducting cognitive interviews, with the recognition that resource or other constraints may prevent all guidelines from being followed in every study. Agencies are encouraged to develop additional, more detailed standards focused on their specific survey question evaluation activities. Additionally, these standards and guidelines are based on the current state of knowledge about cognitive interview practices. Agencies are encouraged to conduct sound empirical research to strengthen the guidelines included in the document so as to further improve the quality of cognitive interview studies.