


Web Probing 2

Session Organisers Dr Katharina Meitinger (GESIS Leibniz Institute for the Social Sciences)
Dr Dorothée Behr (GESIS Leibniz Institute for the Social Sciences)
Dr Michael Braun (GESIS Leibniz Institute for the Social Sciences)
Dr Lars Kaczmirek (University of Vienna)
Time: Wednesday 17th July, 14:00 - 15:00
Room D13

Web probing – that is, the implementation of probing techniques from cognitive interviewing in web surveys with the goal of assessing the validity of survey items – is a valuable addition to the toolbox of (cross-cultural) survey methodologists (Behr et al. 2017). Because it is implemented in web surveys, web probing can draw on large sample sizes, which allow for an analysis of response patterns in subpopulations and an assessment of the prevalence of question problems and themes. The method has already been used to assess measurement instruments for a large variety of topics, and methodological research has already addressed several aspects of web probing (e.g., optimal visual text box design [Behr et al. 2013], probe order [Meitinger et al. 2018], nonresponse detection [Kaczmirek et al. 2017], and targeted embedded probing [Scanlon 2016]).
Although web probing has been successfully applied to several substantive and methodological topics, research gaps and methodological challenges remain. Previous studies have shown that web probing can achieve overall satisfactory data quality; nevertheless, a methodological challenge is to further reduce item nonresponse and mismatching responses. There is also great diversity in the samples used in web probing studies (e.g., quota-based nonprobability samples, crowdsourcing platforms such as MTurk), but a discussion is still missing on how different samples might affect data quality and which conclusions can be drawn from different data sources. Also, most previous web probing studies focused on Western countries, and the majority of studies used the method after official data collection to follow up on problematic items rather than during a pretest. Thus, the full potential of the method has not yet been explored.
For this session, we invite (1) presentations with a substantive application of web probing and (2) presentations that address some of the methodological challenges and considerations of web probing.

Keywords: web probing, cognitive approach, sample

Web Probing for Survey Pretesting – How Does Data Quality & Problem Detection Compare to Cognitive Interviews?

Mr Andrew Caporaso (Westat) - Presenting Author
Mrs Hanyu Sun (Westat)
Ms Terisa Davis (Westat)
Mr David Cantor (Westat)

This research will explore the utility of web probing as a method for pretesting survey questions.
Web probing allows researchers to receive rapid feedback from a large and geographically dispersed pool of participants by administering survey items and follow-up probes over the internet. With a larger pool of respondents, web probing methods increase the chances of reaching participants with rare characteristics (e.g., minorities or people with certain health conditions) who would be challenging to recruit using conventional methods. While web probing presents a promising new method for pretesting questions, a number of its features differ from cognitive interviewing and may affect data quality, such as the absence of an interviewer, the expected interview length, and the mode of administration (online versus in-person with a paper survey).
We used web probing to evaluate 11 questions that were also being evaluated by cognitive interviewing during the same time period. The tested items included both behavioral and attitudinal questions on a variety of health-related topics. Each question was followed by open- and closed-ended probes aimed at determining comprehension of the question content and difficulty in answering. Findings from 391 online participants will be compared to those of 15 in-person cognitive interviews. Web probing participants were recruited nationally through Amazon’s Mechanical Turk (MTurk). In-person cognitive interviewing participants were recruited through local resources. The two pretesting methods will be compared for data quality, including sample composition, item missing rates, and the extent to which probe responses were sufficient for evaluating respondents’ answers to the survey questions. The methods will also be compared with respect to problems identified in the first three stages of the response process (comprehension, retrieval, and judgement), and concordance in self-reported difficulty of survey items.


Investigating the Effect of Different Methods of Online Probing on a Researchmessenger Design and a Regular Responsive Survey

Dr Vera Toepoel (Utrecht University) - Presenting Author
Dr Peter Lugtig (Utrecht University)
Dr Marieke Haan (Groningen University)
Dr Bella Struminskaya (Utrecht University)
Miss Anne Elevelt (Utrecht University)

Relevance & Research Question: In recent years, surveys have increasingly been adapted to mobile devices. An innovative way to administer questions is via a researchmessenger, WhatsApp-like survey software that communicates the way one does via WhatsApp (see www.researchmessenger.com). In this study we compare different methods of online probing (see Moerman, 2010) in a researchmessenger layout to a responsive survey layout. We expect stronger effects of online probing (longer answers and more themes) in the researchmessenger survey than in the regular survey.

Methods & Data: The experiment was carried out in 2018 using panel members from Amazon Mechanical Turk in the United States. Respondents were randomly assigned to either the researchmessenger survey or the regular responsive survey. We used four blocks of questions covering politics, news, sports, and health. To investigate question order effects (and possible respondent fatigue depending on the type of survey), we randomly ordered the blocks of questions. In total, 1,728 respondents completed the survey.

Results: We will investigate the effect of different probing techniques based on Moerman (2010) by comparing the results of the researchmessenger and the regular survey. Respondents could self-select into a particular device, so we will also compare results obtained via different devices. We will show a video of the layout of both the researchmessenger and the regular survey.

Added Value: The experiment identifies recommendable design characteristics for an online survey at a time when survey practitioners need to rethink the design of their surveys, since more and more surveys are being completed on mobile phones and response rates are declining.


Moving Web Probing Forward: An Examination of Probe Type and Formatting

Dr Paul Scanlon (National Center for Health Statistics) - Presenting Author


The National Center for Health Statistics has integrated web probing into its question evaluation efforts, and in particular uses closed-ended web probes in order to extrapolate the findings from cognitive interviews to the wider population and examine the distribution of problems and patterns of interpretation across population subgroups. However, more methodological work is needed to refine the method. Analysis of prior web probing studies led to the hypothesis that much of the observed differential item non-response between web probe and non-web probe items was related to the fact that the web probes asked about potentially cognitively difficult information and that they were nearly all formatted as “select-all-that-apply” questions (many with a large number of answer categories). The most recent round of NCHS’ methodological web survey aimed to confirm these findings.

First, instead of only employing comprehension probes, NCHS included a wider variety of probes in its Winter 2019 survey, including category selection and knowledge probes. Given this variation, an analysis of the differential item non-response between these various types of probes was possible. Second, while previous rounds’ probe designs were based on the idea that select-all probes more closely match the type of data collected in cognitive interviewing, other research has indicated that forced-choice items encourage deeper cognitive processing. In order to systematically examine the impact of formatting on web probe responses, an experiment was also embedded in the Winter 2019 survey. Half of the respondents were administered select-all-that-apply probes, whereas the other half were administered forced-choice probes (formatted as yes/no grids in web browsers and as individual yes/no items in mobile browsers for optimization purposes). This presentation will report the results and implications of these analyses and suggest ways in which web probing can be optimized for inclusion in questionnaire pretests.