
ESRA 2021 Program at a glance



Papers from the MDA special issue on open-ended questions

Session Organisers Professor Matthias Schonlau (University of Waterloo)
Dr Dorothée Behr (GESIS)
Professor Katharina Meitinger (University of Utrecht)
Dr Cornelia Neuert (GESIS)
Time: Friday 2 July, 16:45 - 18:00

This session presents papers published in the recent special issue on the analysis of open-ended questions in Methods, Data, Analyses (MDA), as well as one additional paper by the editors. The special issue was edited by Dorothée Behr, Katharina Meitinger, Cornelia Neuert and Matthias Schonlau.

Keywords: open-ended questions, text data

Open-ended versus Closed Probes: Assessing different formats of web probing

Dr Cornelia Neuert (GESIS) - Presenting Author
Professor Katharina Meitinger (University of Utrecht)
Dr Dorothée Behr (GESIS)

The method of web probing integrates cognitive interviewing techniques into online surveys and is increasingly used to evaluate questions by collecting data on respondents’ answer processes. Typically, web probes are administered as open-ended questions with text fields, placed directly after the question to be tested. While open-ended probes in web surveys have yielded promising insights for detecting problematic survey items and inequivalence across countries, it is generally acknowledged that open-ended questions are more burdensome for respondents to answer and suffer from higher item nonresponse.
A second possibility is to administer probing questions in a closed-ended format, with response options developed from previously collected qualitative cognitive interviewing data. Compared to open-ended probes, closed-ended probes drastically reduce the costs and burden involved in data processing and analysis, because they make coding schemes and the coding of responses unnecessary, and they reduce the burden of answering. However, it is still an open question whether the insights gained into item validity when implementing closed probes are comparable to those gained when asking open-ended probes, and whether closed probes are equally suitable for capturing the cognitive processes that open-ended probes are traditionally intended to capture.
In this paper, we address the following two research questions: 1) Are open-ended probes and closed probes comparable with regard to the substantive themes provided? 2) Are open-ended probes and closed probes comparable with regard to data quality?
Based on a sample of 1,600 German panelists of an online access panel, we conducted a web experiment comparing responses to closed-ended and open-ended probing questions for three questions under consideration. The study was fielded in July 2019.
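As a rough illustration of the second research question, the sketch below shows how item nonresponse could be compared between an open-ended and a closed-ended probe condition with a two-proportion z-test. This is not the authors' analysis: the counts, the even split of the 1,600 panelists, and the use of statsmodels are assumptions for illustration only.

```python
# Hypothetical illustration: comparing item nonresponse between probe formats
# with a two-proportion z-test. The counts below are invented, not study data.
from statsmodels.stats.proportion import proportions_ztest

nonresponse = [112, 64]      # item nonresponses: [open-ended probe, closed-ended probe] (assumed)
n_respondents = [800, 800]   # 1,600 panelists split evenly across conditions (assumed)

stat, pval = proportions_ztest(count=nonresponse, nobs=n_respondents)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```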


How much is a box? The hidden cost of adding an open-ended probe to an online survey

Dr Malte Lübker (Hans Boeckler Foundation) - Presenting Author

Probing questions, essentially open-ended comment boxes attached to a traditional closed-ended question, are increasingly used in online surveys. They give respondents an opportunity to share information that goes beyond what can be captured through standardized response categories. However, even when probes are non-mandatory, they can add to perceived response burden and incur a cost in the form of lower respondent cooperation. This paper seeks to measure this cost and reports on a survey experiment that was integrated into a short questionnaire on a German salary comparison site (N = 22,306). Respondents were randomly assigned to one of three conditions: a control without a probing question; a probe that was embedded directly into the closed-ended question; and a probe displayed on a subsequent page. For every meaningful comment gathered, the embedded design resulted in 0.1 break-offs and roughly 3.7 item missings for the closed-ended question. The paging design led to 0.2 additional break-offs for every open-ended answer it collected. Against expectations, smartphone users were more likely to provide meaningful (albeit shorter) open-ended answers than those using a PC or laptop. However, smartphone use also amplified the adverse effects of the probe on break-offs and item non-response to the closed-ended question. Despite documenting their hidden cost, this paper argues that the value of the additional information gathered by probes can make them worthwhile. In conclusion, it endorses the selective use of probes as a tool to better understand survey respondents.
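The per-comment figures quoted above (e.g., 0.1 break-offs per meaningful comment) are simple ratios of counts. The snippet below only illustrates this arithmetic with invented numbers; it does not use the study's data.

```python
# Illustrative arithmetic only: deriving per-comment "costs" such as break-offs
# per meaningful comment. All counts below are invented placeholders.
conditions = {
    "embedded": {"break_offs": 30, "item_missings": 1110, "meaningful_comments": 300},
    "paging":   {"break_offs": 50, "item_missings": 0,    "meaningful_comments": 250},
}

for name, c in conditions.items():
    breakoffs_per_comment = c["break_offs"] / c["meaningful_comments"]
    missings_per_comment = c["item_missings"] / c["meaningful_comments"]
    print(f"{name}: {breakoffs_per_comment:.2f} break-offs and "
          f"{missings_per_comment:.2f} item missings per meaningful comment")
```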


Interviewers’ and Respondents’ Joint Production of Response Quality in Open-ended Questions. A Multilevel Negative Binomial Regression Approach

Dr Alice Barth (University of Bonn) - Presenting Author
Dr Andreas Schmitz (University of Bonn)


Open-ended questions are an important methodological tool for social science researchers, but they suffer from large variations in response quality. In this contribution, we discuss the state of research and develop a systematic approach to the mechanisms of quality generation in open-ended questions, examining effects from respondents and interviewers as well as those arising from their interaction. Using data from an open-ended question in the ALLBUS 2016 on associations with foreigners living in Germany, we first apply a two-level negative binomial regression to model influences on response quality at the interviewer and respondent levels and their interaction. In a second regression analysis, we assess how qualitative variation (information entropy) in responses at the interviewer level is related to interviewer characteristics and data quality. We find that respondents’ education, age, gender, motivation and topic interest influence response quality. The interviewer-related variance in response length is 36%. Whereas interviewer characteristics (age, gender, education, experience) do not have a direct effect, they affect response quality through interactions between interviewer and respondent characteristics. Notably, an interviewer’s experience has a positive effect on response quality only in interaction with highly educated respondents.
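For readers who want a concrete picture of the two modelling ideas mentioned here, the following Python sketch fits a single-level negative binomial regression of answer length on respondent characteristics and computes the information entropy of coded responses per interviewer. The input file and column names are assumptions, and the paper's actual two-level (multilevel) specification is not reproduced.

```python
# Minimal single-level sketch, assuming a data frame with one row per respondent
# and columns: word_count, education, age, female, interviewer_id, response_code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import entropy

df = pd.read_csv("allbus_open_ended.csv")   # hypothetical file name

# Negative binomial regression of response length on respondent characteristics
# (the paper adds an interviewer level; this sketch does not)
model = smf.glm(
    "word_count ~ education + age + female",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()
print(model.summary())

# Shannon entropy of the coded responses within each interviewer's workload
def code_entropy(codes: pd.Series) -> float:
    probs = codes.value_counts(normalize=True)
    return entropy(probs, base=2)   # in bits

interviewer_entropy = df.groupby("interviewer_id")["response_code"].apply(code_entropy)
print(interviewer_entropy.describe())
```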


Coding text answers to open-ended questions: human coders and statistical learning algorithms make similar mistakes

Miss Zhoushanyue He (University of Waterloo)
Professor Matthias Schonlau (University of Waterloo) - Presenting Author

Text answers to open-ended questions are often manually coded into one of several pre-defined categories or classes. More recently, researchers have begun to employ statistical models to automatically classify such text responses. It is unclear whether such automated coders and human coders find the same types of observations difficult to code, or whether humans and models might be able to compensate for each other’s weaknesses. We analyze correlations between the estimated error probabilities of human and automated coders and find: 1) statistical models have higher error rates than human coders; 2) automated coders (models) and human coders tend to make similar coding mistakes; specifically, the correlation between the estimated coding error of a statistical model and that of a human is comparable to that of two humans; 3) two very different statistical models give highly correlated estimated coding errors. Therefore, a) the choice of statistical model does not matter, and b) having a second automated coder would be redundant.
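A minimal sketch of the kind of comparison described above, under assumed data: a TF-IDF plus logistic regression classifier stands in for the automated coder, its out-of-fold error probabilities are estimated by cross-validation, and these are correlated with a human coder's disagreement with gold-standard codes. File and column names are invented, and the classifier is a stand-in rather than the models used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Assumed input: one row per text answer, with gold-standard and human-assigned codes
df = pd.read_csv("coded_answers.csv")   # columns: text, gold_code, human_code (hypothetical)

# TF-IDF features and a multinomial logistic regression as the "automated coder"
X = TfidfVectorizer(min_df=2).fit_transform(df["text"])
y = df["gold_code"].to_numpy()

# Out-of-fold class probabilities estimate the model's error probability per answer
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")
classes = np.unique(y)                                   # column order used by scikit-learn
gold_idx = np.searchsorted(classes, y)
model_error = 1.0 - proba[np.arange(len(y)), gold_idx]   # P(model codes this answer incorrectly)

# Human "error": disagreement between the human coder and the gold-standard code
human_error = (df["human_code"].to_numpy() != y).astype(float)

print("Correlation of estimated coding errors:",
      np.corrcoef(model_error, human_error)[0, 1])
```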