
ESRA 2019 full program




Web Probing 1

Session Organisers: Dr Katharina Meitinger (GESIS Leibniz Institute for the Social Sciences)
Dr Dorothée Behr (GESIS Leibniz Institute for the Social Sciences)
Dr Michael Braun (GESIS Leibniz Institute for the Social Sciences)
Dr Lars Kaczmirek (University of Vienna)
Time: Tuesday 16th July, 16:00 - 17:00
Room: D20

Web probing – that is, the implementation of probing techniques from cognitive interviewing in web surveys with the goal of assessing the validity of survey items – is a valuable addition to the toolbox of (cross-cultural) survey methodologists (Behr et al. 2017). Because it is implemented in web surveys, web probing can draw on large sample sizes, which allow for an analysis of response patterns in subpopulations and an assessment of the prevalence of question problems and themes. The method has already been used to assess measurement instruments for a large variety of topics, and methodological research has addressed several aspects of web probing (e.g., optimal visual text box design [Behr et al. 2013], probe order [Meitinger et al. 2018], nonresponse detection [Kaczmirek et al. 2017], and targeted embedded probing [Scanlon 2016]).
Although web probing has been successfully applied to several substantive and methodological topics, research gaps and methodological challenges remain: previous studies have shown that web probing can achieve overall satisfactory data quality; nevertheless, a methodological challenge is to further reduce item nonresponse and mismatching responses. There is also great diversity in the samples used in web probing studies (e.g., quota-based nonprobability samples, crowdsourcing platforms such as MTurk), but a discussion of how different samples might affect data quality and which conclusions can be drawn from different data sources is still missing. In addition, most previous web probing studies focused on Western countries, and the majority used the method after official data collection to follow up on problematic items rather than during a pretest. Thus, the full potential of the method has not yet been explored.
For this session, we invite (1) presentations with a substantive application of web probing and (2) presentations that address some of the methodological challenges and considerations of web probing.

Keywords: web probing, cognitive approach, sample

Probe Sequence and Respondent’s Characteristics in Web Probing

Miss Dörte Naber (Universidad de Granada, Universität Osnabrück) - Presenting Author
Professor José Luis Padilla García (Universidad de Granada)

Survey researchers can resort to a consolidated set of pretesting methods (Cognitive Interviewing, Behavior Coding, Focus Groups, etc.) to evaluate survey questions developed for traditional survey modes. New administration modes confront researchers with new challenges: Web Probing, which applies probing techniques from traditional Cognitive Interviewing, is increasingly being used to test web survey questions. In the context of cross-cultural studies, where various linguistic questionnaire versions are administered to distinct cultural and linguistic groups, the topic of pretesting becomes even more pressing, as the equivalence of items and questionnaires has to be ensured in order to make valid comparisons between groups. Recently, some research has started to scrutinize the effectiveness of different aspects of the Web Probing method to ensure that it results in valid and high-quality pretesting data. Inspired by the work of Meitinger, Braun and Behr (2018), in our presentation we will provide evidence on how different probe types, and the sequence in which they are presented, affect several indicators of response behavior, especially the incidence of nonresponse and mismatches as measures of data quality. We collected data from 1,000 participants (500 in Germany and 500 in Spain) on items from Round 8 of the European Social Survey. Furthermore, we will present an in-depth analysis of the observed effects in selected subgroups and discuss the joint effect of respondents’ characteristics and probe sequence on indicators of response behavior in the Web Probing context. In doing so, we aim to deepen the understanding of factors that play a crucial role for response behavior in the specific context of Web Probing and to facilitate decisions regarding the practical implementation of the method in order to obtain high data quality.


The Impact of Number of Screens and Probe Order on the Response Quality of Probing Questions

Dr Katharina Meitinger (University of Utrecht)
Mr Adrian Toroslu (University of Utrecht)
Mrs Klara Raiber (University of Mannheim)
Professor Michael Braun (GESIS - Leibniz-Institute for the Social Sciences) - Presenting Author

Web probing uses different probe types – such as category-selection probes and specific probes – to inquire about different aspects of an item. Previous research has mostly asked one probe type per item, but in some situations it might be preferable to test potentially problematic items with multiple probe types. Previous research has already revealed effects of the order of probes on response quality (Meitinger, Braun, & Behr, 2018). However, the visual presentation of probes on one screen versus multiple screens, and its interaction with probe order, has not been studied yet.
Based on theoretical considerations and previous studies, we hypothesize that respondents prefer to communicate their motivation for the selection of their answer at a closed item (i.e., to respond to a category-selection probe) rather than elaborate on the things that came to their minds when reading an item (i.e., to respond to a specific probe). Thus, they might show a tendency to motivate their response even when this is not asked for, that is, in the case of a specific probe. This tendency should be most pronounced if both probes are placed on separate screens. In this case, respondents must assume that they will not get a second chance to communicate the reason for their choice at the closed item. Nevertheless, this tendency should be mitigated if both a specific and a category-selection probe are presented on one screen.
In this study, we report evidence from a web experiment conducted with 532 respondents from Germany in September 2013. In this experiment, we asked respondents two different probes for one item and manipulated both the sequence of the probes and whether they appeared on one or on two separate screens. Both the sequence of probe types and whether they are asked on one or on two screens have an impact on response quality.


What Makes a ‘Good’ Web Probing Respondent?

Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)

Cognitive online pretests (using the web probing method) have recently been recognized as a promising pretesting tool for evaluating questions prior to their use in actual surveys. While previous research has shown that they produce similar results to face-to-face cognitive interviews with regard to the problems detected and the item revisions suggested, little is known about the ideal design and implementation of a cognitive online pretest. For instance, the proportion of item non-response is significantly higher in web probing pretests than in face-to-face cognitive interviews, due to the higher response burden and the absence of a motivating interviewer. Still, the answers of many web respondents are informative and help to identify question problems. This suggests that some respondents are better suited to participating in a cognitive online pretest than others. This presentation addresses this issue by examining which respondent characteristics are associated with high-quality answers to open-ended probing questions in web probing pretests. To this end, we analyze data from several (international) cognitive online pretests and examine whether respondents’ sociodemographics (e.g., sex, age, educational level, nationality) and the device they use for answering the web survey (e.g., PC, tablet, smartphone) have an effect on the quality of their responses. The response quality indicators used include (1) word count per open-ended probe, (2) response times, (3) amount of non-response and uninterpretable answers, and (4) number of drop-outs.