
ESRA 2023 Glance Program


All time references are in CEST

Methods for questionnaire development: SQP, Cognitive Interviewing, and others 2

Session Organisers Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Lydia Repke (GESIS - Leibniz Institute for the Social Sciences)
Time Thursday 20 July, 16:00 - 17:30
Room U6-22

There are various methods and procedures available for evaluating survey items during questionnaire development. They all share the goal of helping researchers identify questions that are flawed, difficult to comprehend, or likely to lead to measurement error. However, the available methods differ in several respects, such as their timing, the kind of output they produce, the amount of data collection required, and whether respondents, experts, or computer systems are involved.
In practice, considerations such as the available time and budget, the target population, and the mode of data collection affect the choice of method.
As there is still no clear consensus about best practices for question evaluation, this session aims to discuss current questionnaire evaluation practices using different methods and tools. We invite papers that use different types of evaluation methods, such as expert reviews, cognitive interviewing, or the Survey Quality Predictor (SQP; https://sqp.gesis.org).
In particular, we invite papers that
(1) exemplify which evaluation method might best be used during questionnaire development;
(2) assess the different evaluation methods with respect to their advantages and disadvantages;
(3) provide information on how methods can best be used in combination.


Keywords: questionnaire development, question evaluation methods, pretesting, cognitive interviewing, SQP, expert reviews, web probing, behavior coding

Papers

The Integration of the Survey Quality Predictor 3.0 into the Questionnaire Development Process – Applications and Potentials

Dr Lydia Repke (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author

Designing questionnaires is said to be an art: it involves knowledge and experience. To make it a more scientific activity, Saris and colleagues developed a practical, hands-on tool called the “Survey Quality Predictor” (SQP). SQP is an open-access, web-based program that predicts the quality of survey questions for continuous latent variables based on the linguistic and formal characteristics of the survey item (e.g., the properties of the answer scale). The underlying prediction algorithm was derived from a meta-analysis of many multitrait-multimethod (MTMM) experiments comprising more than 6,000 survey questions in 28 languages and 33 countries.
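As background (this definition comes from the MTMM tradition on which SQP builds and is not stated in the abstract itself): in that framework, the quality of a survey question is usually understood as the product of its reliability and its validity,

q^2 = r^2 * v^2,

where r is the reliability coefficient (the strength of the relationship between the observed response and the stable “true score”) and v is the validity coefficient (the strength of the relationship between the true score and the latent trait of interest). SQP predicts these components from the coded formal and linguistic characteristics of the item.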
SQP is not intended to replace cognitive pretesting, expert review, or web probing techniques. Instead, it is a complementary tool to help researchers in the development phase of new questionnaires in national and international survey projects. In this presentation, I will show how researchers can use SQP 3.0 to find survey questions for their questionnaires, improve their questions before data collection, and identify discrepancies between the source and translated versions of a survey question. Lastly, I will highlight the collaborative nature of SQP as an ongoing research project and suggest avenues for potential collaboration.


Score Validation via Cognitive Interviewing with Psychological Scales Adapted in Greek

Dr Michalis Michaelides (University of Cyprus) - Presenting Author
Ms Stefani Andreou (University of Reading)

A common practice in the social sciences is to administer self-report scales to measure constructs of interest. Participants provide responses to closed-form questions, which are then used to calculate an overall score to quantify the intended construct. Empirical procedures can be implemented to examine whether the scores obtained are sufficiently reliable and valid, as described in the Standards for Educational and Psychological Testing (AERA, APA & NCME, 2014). A source of evidence that has been neglected in validation procedures concerns response processes: how participants actually process a question and decide on their response. Hubley and Zumbo (2017) have suggested that evidence from cognitive interviews can offer important information about the response-process validity of scale scores.

The current study explored response-process validity evidence through cognitive interviewing. Fifteen Cypriot adults completed the Greek adaptations of the 10-item Life Orientation Test-Revised (LOT-R) and the 5-item Satisfaction with Life Scale (SWLS). They were then interviewed about their thinking processes on each item and recompleted the two scales.

Based on a coding scheme, the analysis showed that various processes may occur when a respondent reflects on a self-report item. For LOT-R items, self-referential processing was commonly employed; for SWLS items, participants often mentioned life aspects and self-judgments. When recompleting the scales, all participants changed at least one answer, suggesting either reconsideration of responses following the in-depth processing during the preceding interview or unreliable, forgetful responding. Responses to the LOT-R item “I hardly ever expect things to go my way” often signaled comprehension difficulties and significant response shifts at the recompletion stage. Validation studies via cognitive interviewing are useful for examining the proper use and interpretation of self-report scales and for identifying unexpected item functioning, particularly after local adaptations.


Evaluating Questions when Time is Tight: Using Multi-Method, Multi-Phase Approaches in a Rapid Survey Environment

Dr Paul Scanlon (National Center for Health Statistics) - Presenting Author

One of the hallmarks of surveys collected by government statistical agencies, particularly in the United States, has been that the questions used on those surveys typically undergo rigorous pre-testing. This is usually accomplished using cognitive interviewing, a relatively quick method that provides in-depth information about the interpretation of survey items, allowing stakeholders to make informed question design decisions. However, as agencies and others expand their program portfolios to include quick-turnaround surveys that emphasize the timeliness of data, where question evaluation fits into the process and which methods are appropriate will need to be reexamined. In anticipation of this epistemological shift, the National Center for Health Statistics has been investigating the use of mixed-method, multi-phase question evaluation approaches that can still provide actionable information for both survey programs and data users.

Two case studies showing how a combination of cognitive interviewing, open- and closed-ended web probing, and experimental design was used to evaluate data collection approaches will be presented. In the first example, an interleafed and a grouped version of a series of items asking about seven sex education-related topics were compared using a split-sample design and web probes. The findings from the experiment and the probes were then analyzed alongside those from a series of concurrent cognitive interviews to provide a detailed explanation of the response processes behind the two data collection approaches. The second example compared three methods for collecting information about respondent disability: two single-item approaches and a set of six disability domain-specific questions. Previous cognitive interviewing results were used to construct a series of closed-ended probes, which were analyzed alongside the results of a three-way split-sample experiment to evaluate the comparative quality of the three methods.