
ESRA 2019 Programme at a Glance


Recent Developments in Question Testing 2

Session Organisers: Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences)
Time: Friday 19th July, 13:30 - 14:30
Room: D12

It is universally acknowledged that testing survey questions prior to administering them to respondents is a vital part of survey development, as pretesting reduces potential measurement error and helps to improve the quality of the data collected. For question testing, survey methodologists have a broad set of methods at their disposal (cognitive interviews, behavior coding, response latency measurement, vignettes, expert reviews). Recently, innovative techniques and new data sources have been added to the survey researcher’s toolbox, such as eye tracking, web probing, mouse movements, and crowdsourcing.
So far, few methodological studies have addressed the effectiveness of these newer methods in improving questionnaires and how they compare to traditional pretesting methods.

This session invites papers that…
(1) explore innovative uses of new methods or techniques for question testing;
(2) highlight the relative effectiveness of different pretesting methods; or
(3) demonstrate how new and existing techniques might best be used in combination (best-practice examples) to offer additional insights.

We also invite presentations discussing (new) question testing methods in a cross-cultural context.

Keywords: question testing, web probing, cognitive interviewing, eye tracking, pretesting methods

Combining the Traditional with the Innovative: The Evaluation of E-Cigarette Questions with Cognitive Interviewing and Web Probing

Dr Paul Scanlon (National Center for Health Statistics) - Presenting Author


The term “electronic cigarettes” covers a wide variety of devices that simulate smoking by heating liquid tobacco products to produce an aerosol or vapor, and these devices have recently become popular across the American public. As a result, the United States’ National Center for Health Statistics (NCHS) evaluated questions measuring e-cigarette use for inclusion on the National Health Interview Survey (NHIS) in a multiple-method study using iterative rounds of cognitive interviewing followed by web probing on an internet-based survey.

First, two versions of the e-cigarette question were cognitively tested: one including several sentences of introductory text defining e-cigarettes and one without the introductory text. Across two rounds of cognitive interviewing, NCHS’ Collaborating Center for Questionnaire Design and Evaluation Research (CCQDER) found that the term “e-cigarette” was widely understood, indicating that the introductory text is unnecessary and burdensome. The cognitive studies also uncovered potential sources of response error; notably, the e-cigarette question was capturing marijuana use in addition to tobacco product use.

Next, to explore whether these cognitive interview findings generalize to the larger population and to subgroups within it, NCHS conducted a split-ballot experiment on the inclusion of the introductory text using NCHS’ web-based Research and Development Survey (RANDS), which is run on a national probability web panel. Half of the sample received the question with the introductory text, while the other half received the version without it. Additionally, closed-ended web probes were embedded into the questionnaire in order to determine whether respondents’ comprehension changed due to the inclusion of the text.
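
(As an illustrative aside, not drawn from the study itself: a split-ballot comparison of embedded closed-ended probe responses is often summarized with a simple contingency-table test. The Python sketch below shows one such comparison; the response categories and counts are hypothetical placeholders, not RANDS results.)

# Minimal sketch (not from the study): comparing closed-ended web-probe
# responses between the two split-ballot conditions with a chi-square test.
# Counts below are hypothetical placeholders, not RANDS results.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: ballot condition (with / without introductory text)
# Columns: probe response categories (e.g., tobacco only, marijuana, both)
probe_counts = np.array([
    [412, 38, 25],   # with introductory text
    [405, 61, 30],   # without introductory text
])

chi2, p_value, dof, expected = chi2_contingency(probe_counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")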

This presentation will provide the full results of this study, which combined both traditional and innovative question evaluation methods. It will focus on how the qualitative data from the cognitive interviewing provided the foundation for the web probing and experimental design, and how the data from the three methods are analytically integrated.


Innovations in Survey Design: Joint Cognitive and Usability Pre-Testing

Ms Kathleen Kephart (US Census Bureau) - Presenting Author
Ms Mary Davis (US Census Bureau)
Ms Jasmine Luck (US Census Bureau)

Researchers in the Center for Behavioral Science and Methods (CBSM) at the U.S. Census Bureau have developed a model of iterative joint cognitive and usability testing for web and paper mixed-mode surveys. Typically, cognitive testing is conducted in a paper mode, the wording is finalized, the web instrument is programmed, and then web usability testing is conducted. If a comprehension issue is found during usability testing, it may be too late in the survey lifecycle to modify question wording.
Generally, cognitive testing is concerned with issues of comprehension, such as whether questions are consistently interpreted as intended. Usability testing, by contrast, focuses on whether respondents can complete tasks efficiently, for instance navigating through the survey, logging in and out, or finding the help page.
For mixed-mode surveys, certain adaptations of an instrument may be required for a new mode. Joint cognitive and usability testing allows for pre-testing the comparability of instruments across the modes in which the survey will be administered. It can also identify whether a cognitive issue is unique to one mode or appears in both.
We propose a model of joint cognitive and usability iterative pre-testing that allows us to conduct multiple rounds and to pre-test question modifications, with adaptations between rounds.
We have found that iterative joint cognitive and usability testing allows us to use staff and resources more efficiently. Further, we can better isolate whether an issue is related to the wording of the question or to the mode of the instrument. While there are challenges to adopting this model, CBSM will present creative solutions to some of them.


The Potential of Eye Movements and Pupil Dilations as Indicators for Question Testing

Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author

To collect high-quality survey data, survey designers aim to develop questions that each respondent can understand as intended. A critical step to this end is designing questions that minimize respondent burden by reducing the cognitive effort required to comprehend and answer them. In light of this, eye tracking appears to be a promising technique for identifying problematic survey questions. This paper investigates the potential of eye movements and pupil dilations as indicators for evaluating survey questions. In a laboratory experiment, respondents were randomly assigned to either a problematic or an improved version of six experimental questions. By analyzing respondents’ reading patterns and the cognitive effort they invested while answering the questions (operationalized by fixation times, fixation counts, and pupil diameters), the study examined whether these parameters could be used to distinguish between the two versions. Identifying the improved version worked best by observing reading patterns, whereas in most cases it was not possible to differentiate the two versions on the basis of pupil data. For fixation time and count, the findings were mixed. Limitations and practical implications of the findings are discussed.
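
(As an illustrative aside, not drawn from the paper itself: a comparison of a fixation-based effort measure between a problematic and an improved question version might be sketched as follows in Python. The data are simulated placeholders, and the nonparametric test is only one plausible analytic choice.)

# Illustrative sketch only: contrasting total fixation time on a question
# between its problematic and improved versions with a Mann-Whitney U test
# (fixation durations are typically skewed). Data are simulated placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical total fixation times (ms) per respondent on one question
fixation_problematic = rng.lognormal(mean=8.1, sigma=0.3, size=40)
fixation_improved = rng.lognormal(mean=7.9, sigma=0.3, size=40)

stat, p_value = mannwhitneyu(fixation_problematic, fixation_improved,
                             alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.3f}")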