Thursday 20th July, 14:00 - 15:30 Room: Q2 AUD3


Online probing: Cognitive interviewing techniques in online surveys and online pretesting 2

Chair: Dr Katharina Meitinger (GESIS Leibniz Institute for the Social Sciences)
Coordinator 1: Dr Dorothée Behr (GESIS Leibniz Institute for the Social Sciences)
Coordinator 2: Dr Lars Kaczmirek (GESIS Leibniz Institute for the Social Sciences)

Session Details

Online probing is a cognitive interviewing technique that can be used in online surveys and is especially useful in cross-cultural research (see Willis 2015 for a research synthesis on cross-cultural cognitive interviewing). Its main advantages are large sample sizes, the ability to explain response patterns in subpopulations, the possibility of evaluating the prevalence of question problems and themes, a higher likelihood of identifying problems during pretesting, and higher anonymity. Online probing is a fully scripted approach, and the procedure is highly standardized (Braun et al. 2015; Meitinger & Behr 2016). Automatic on-the-fly analysis and coding of answers during the interview is also possible and can be used to issue automatic follow-up questions, for example to detect and reduce item nonresponse (Kaczmirek, Meitinger, & Behr, forthcoming).
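As a purely illustrative sketch of how such scripted, on-the-fly follow-up logic might work (the probe wordings, nonresponse patterns, and length threshold below are assumptions for illustration, not taken from the cited studies):

```python
from typing import Optional
import re

# Hypothetical nonresponse patterns; a real survey script would use
# project-specific and language-specific pattern lists.
NONRESPONSE_PATTERNS = [
    r"^\s*$",                                   # blank answer
    r"^\s*(don'?t know|dk|no idea)\s*\.?$",     # "don't know" variants
    r"^\s*(no comment|refused?)\s*\.?$",        # refusals
]

def classify_probe_answer(answer: str, min_chars: int = 5) -> str:
    """Classify an open probe answer as it is submitted."""
    for pattern in NONRESPONSE_PATTERNS:
        if re.match(pattern, answer, flags=re.IGNORECASE):
            return "nonresponse"
    if len(answer.strip()) < min_chars:
        return "too_short"
    return "substantive"

def automatic_follow_up(answer: str) -> Optional[str]:
    """Return an automatically issued follow-up probe, or None if the
    answer already looks substantive. Probe wordings are illustrative."""
    status = classify_probe_answer(answer)
    if status == "nonresponse":
        return ("This question is important to us. "
                "Could you briefly describe what you had in mind?")
    if status == "too_short":
        return "Could you say a little more about your answer?"
    return None

# Example: an empty probe answer triggers a motivating follow-up,
# while a substantive answer does not.
print(automatic_follow_up(""))
print(automatic_follow_up("I thought of people who were born abroad."))
```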
Online probing has already been applied to reveal diverging or overlapping interpretations and perspectives with regard to a variety of substantive topics, such as gender attitudes (Behr et al. 2013), xenophobia (Braun, Behr, & Kaczmirek 2013), civil disobedience (Behr et al. 2014a), satisfaction with democracy (Behr & Braun 2015), health (Lee et al. forthcoming), and national identity (Meitinger & Behr 2016).
Several methodological studies have addressed the optimal design and implementation of online probing, e.g., on the size of answer boxes (Behr et al. 2014), on sequence effects of multiple probes (Meitinger, Braun, & Behr, forthcoming), and on its feasibility on Amazon MTurk (Fowler et al. 2016).
Although online probing has been successfully applied to several substantive and methodological topics, several research gaps remain. For example, due to the large sample sizes and the qualitative nature of the probes, data analysis is rather labor-intensive and time-consuming. Also, most previous online probing studies focused on Western countries, and the majority of studies used the method after official data collection to follow up on problematic items. Thus, the full potential of the method has not yet been explored.
For this session, we invite papers on the method of online probing for substantive research and as part of pretests or methods research, as well as studies that compare online probing with other pretest methods. We especially welcome (1) presentations with a substantive application of online probing and (2) presentations that address some of the methodological challenges and considerations of online probing.

Paper Details

1. Online Probing of the LFS Questionnaire
Dr Matea Paškvan (Statistics Austria)
Mr Marc Plate (Statistics Austria)

Cognitive interviewing is regarded as the gold standard for improving and evaluating questionnaires. However, the method also has its downsides, such as small sample sizes, high costs, and the risk of interviewer effects. Online probing, a more recent development in survey methodology, was introduced to address these problems (Behr et al., 2012, 2014). Online probing is normally implemented in a web questionnaire in which respondents are asked cognitive probes after they have answered the closed item. Recent research indicates that online probing is an effective method for detecting problematic questions (Behr et al., 2012, 2014; Meitinger & Behr, 2016). However, most research to date has been based on attitudinal questions (civil disobedience, gender ideology, etc.), raising the question of whether online probing is also suitable for factual questions and, more specifically, which probing technique is best suited to eliciting answers with the greatest analytical potential for such questions.
The present contribution addresses these questions by implementing online probing for the Labour Force Survey (LFS). Moreover, this contribution is, to our knowledge, the first to compare the effectiveness of different online probing techniques (confidence ratings, comprehension probes, and category-selection probes) for factual questions.
The sample was recruited from the LFS CATI sample. Wave 1 of data collection is complete, and wave 2 is currently in the field. So far, 145 respondents have completed the web questionnaire; another 150 completed questionnaires are expected in wave 2.
Results show that, depending on the question type, different probing techniques may be preferred. For the ISCO question, confidence ratings yielded slightly more characters than category-selection probes (46 vs. 44 characters). In contrast, category-selection probes may be better suited for the question on respondents’ current labour status (WSTATOR), compared with confidence ratings and comprehension probes (48 vs. 41 vs. 33 characters). Comparing confidence ratings with a non-specific probe, results for the NACE question show that confidence ratings fit the question best (45 vs. 34 characters). To test for significance, the presented results will be extended with data from wave 2. Furthermore, the results will be enriched by an analysis of the quality of the given answers.
In sum, the results indicate that online probing is a valuable tool for improving factual questions. However, online probing does not offer the possibility to ask proper follow-up probes, which considerably limits the results. Thus, in line with others (Behr et al., 2014), we believe that traditional methods such as face-to-face cognitive interviews may remain the gold standard when it comes to improving and evaluating factual questions. However, online probing may be used to obtain results faster, at lower cost, and from a broader population. Hence, we conclude that online probing can be seen as a useful tool for an initial or follow-up diagnosis of questions, while the in-depth analysis should be complemented by face-to-face cognitive interviews.


2. Methodological Considerations for the Use of Close-Ended Online Probes
Dr Paul Scanlon (National Center for Health Statistics)

Embedding cognitive probes in online surveys is a relatively low-cost method that allows question and questionnaire designers and evaluators to obtain a large amount of cognitive information in an efficient manner. This is particularly the case in large-scale national and cross-national surveys, where the geographic limitations of typical face-to-face cognitive interviewing samples are more pronounced. The majority of methodological work on online probing has focused on the use of open-ended probes, which are designed to collect qualitative information that is comparable to that which is obtained from cognitive interviewing (Behr et al. 2014, Fowler et al. 2016, Meitinger and Behr 2016).

Besides open-ended probes, however, more structured, close-ended probes should also be considered when designing online probing studies. Unlike open-ended ones, close-ended probes do not produce primary qualitative information, but rather rely on and expand the results of previous qualitative studies (such as a cognitive interviewing project). In doing so, they present a more streamlined and less burdensome question to survey respondents, while allowing the results of qualitative question evaluation studies to be extrapolated to a full survey population (Maitland et al. 2013, Scanlon 2016).

Unfortunately, close-ended online probes have been explored in much less methodological detail than open-ended ones. This presentation will attempt to close this gap by presenting some early methodological results and suggesting new lines of inquiry using data from multiple rounds of the National Center for Health Statistics’ (NCHS) Research and Development Survey (RANDS), a web survey conducted on a probability panel of American adults. Outcome measures such as probe response rates, item non-response and breakoff rates, and category selection behaviors will be used to explore issues such as probe placement, repetition and answer category design. Building on the results of these early methodological studies, a few best practices in the use of these embedded, close-ended probes will be presented, illustrating how their use can fit into wider national and cross-national question evaluation studies.


3. Use of closed probes in a probability panel to validate cognitive pretesting
Professor Nick Allum (University of Essex)
Mr Matt Shapley (University of Essex)
Mr Curtis Jessop (NatCen Social Research)
Ms Sophie Pilley (NatCen Social Research)

In standard cognitive pretesting, interviewers ask respondents to ‘think aloud’ and to answer verbal probes. The purpose is to understand how respondents comprehend and respond to survey questions, with the aim of fixing problems and enhancing data quality. Typically, small samples of five to fifteen participants are used and questions are revised (or not) in light of what is found. Although cognitive testing is well established, quite serious questions remain as to its efficacy, which primarily have to do with the small Ns involved. Firstly, it is possible that not all important problems with a survey item will be uncovered by a small number of interviews. Recent research suggests that indeed many problems may routinely be missed when small samples are employed (Blair and Conrad 2011). Secondly, it is generally not known how significant or widespread such problems might be for the study population as a whole. Thirdly, and following on from the second, it is not known how changes made to questions affect data quality. Web surveys have begun to be used to carry out cognitive testing online, using open-ended probes, in order to solve some of these issues (Behr et al.), but in no case to our knowledge has a probability-based design been used. In this study, we run a randomised experiment on the NatCen online probability-based panel survey. We use closed-ended probes to evaluate revisions made to survey items after they have undergone standard cognitive pretesting. Analysis suggests modest success for standard cognitive testing methods, at least in the cases we examine.