
ESRA 2023 Preliminary Glance Program

All time references are in CEST

Overcoming challenges in mobile questionnaire design 3

Session Organiser: Ms Joanna d'Ardenne (NatCen Social Research)
Time: Thursday 20 July, 14:00 - 15:30
Room: U6-01c

Research commissioners are increasingly interested in offering an online alternative to surveys that have historically relied on face-to-face data collection. Transitioning a CAPI survey to an online survey can pose practical challenges when it comes to questionnaire design. One challenge is the principle of mobile-first design. CAPI questionnaires will have been designed to be completed by a trained interviewer using a large screen device. Web questionnaires, in contrast, must be accessible to novice users, including those who are participating on small screen devices such as smartphones. Poor mobile design can increase respondent burden and frustration. This increases the risk of missing data and break-off.
Although it is straightforward to render simple questions on mobile screens, some CAPI questions require redesign work to make them smartphone appropriate. In this session we invite survey practitioners to showcase any work they have undertaken to develop mobile versions of more challenging CAPI questions. Examples could include, but are not limited to, the following:
- Questions that include lengthy text or complex instructions
- Household enumeration questions
- Grid based or looped questions
- Questions that make use of hidden codes
- Event History Calendars
We invite practitioners to present findings on their redesign process: which designs worked well, which did not, and the methods used to pre-test the refined mobile-first questions.

Keywords: Questionnaire, mobile, pre-testing, usability, web, online, mode, mode transition

Implementation of a Computer-Assisted-Web-Interview Mode for the Adult Education Survey in Austria

Mrs Brigitte Salfinger-Pilz (Statistics Austria) - Presenting Author
Mr Florian Leible (Statistics Austria)

Traditionally, data collection for the Adult Education Survey (AES) has been based on the CAPI mode (Computer-Assisted Personal Interviewing). In 2022, Statistics Austria tested the implementation of CAWI (Computer-Assisted Web Interviewing) for the AES 2022/23 with a pilot survey, focusing on a “mobile first” approach. To add a CAWI mode to the data collection design for the AES, the following steps were undertaken: (1) re-wording phase: question wording and adaptations appropriate for CAWI; (2) testing phase I: cognitive interviews; (3) testing phase II: pre-tests (CAWI/CAPI), including workflow testing; and (4) finalization phase: adaptations, analysis, and finalization of the survey instrument.
The presentation will focus on the re-wording phase and on testing phase I (cognitive interviews), giving detailed insights into the steps taken and the findings obtained. In the first step (re-wording), all survey questions were examined with special attention given to the “mobile-first” approach. Where needed, questions were reformulated or elements of the questions were changed (e.g. instructions, warnings). In testing phase I, a sample of re-worded questions was tested using cognitive interviews. Based on the results of the cognitive interviews, the question and answer texts were reworded again so that a final questionnaire could be used for the pre-tests (testing phase II). Results show that simplified questions and auxiliary texts contributed to a better understanding by respondents.

Dynamic Surveys for Dynamic Life Courses

Dr Sebastian Lang (German Centre for Higher Education Research and Science Studies (DZHW)) - Presenting Author
Ms Andrea Schulze (German Centre for Higher Education Research and Science Studies (DZHW))

Life courses are complex and dynamic, typically involving many changes, interrelations, and path dependencies. This is especially the case in the context of transitions, e.g. from secondary to tertiary education or from tertiary education to the labour market. Life history data are necessary for many social science research questions, especially with respect to causality. At the same time, the collection of these data is usually done retrospectively, is highly demanding and therefore prone to error, is very costly, as most surveys collecting these data are interviewer-administered, and often relies on life history calendars.

Facing these challenges, we aimed to create a survey instrument using a life history calendar that can be used in self-administered web surveys, captures the same degree of complexity as interviewer-administered surveys, reduces complexity for respondents, and uses a responsive design that works on both mobile and non-mobile devices. After implementing a first version of our self-administered life history calendar (see Lang & Carstensen 2022; Carstensen et al. 2022), in this presentation we would like to discuss the further development of our work, with a special focus on additional requirements: beyond the initial challenges, our instrument had to become truly dynamic, enable respondents to create, adapt, and remove episodes as they wish, improve responsiveness, and store data directly as spell data.

We will present comparative results between the first version of our self-administered life history calendar and the new, revised version on breakoffs, response time, missing data, and data quality.
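As a reading aid for the "spell data" format mentioned in the abstract, the sketch below illustrates one common way such data are represented: each life-course episode is a record with a state and a start/end date, rather than a month-by-month calendar grid. The class and field names are invented for illustration and do not reflect the authors' actual implementation.

```python
# Illustrative sketch of spell-format life-course data (hypothetical names).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Spell:
    state: str           # e.g. "enrolled", "employed"
    start: str           # episode start, "YYYY-MM"
    end: Optional[str]   # episode end, or None if ongoing

# A respondent's life course as a list of episodes ("spells"):
spells = [
    Spell("enrolled", "2018-10", "2021-09"),
    Spell("employed", "2021-10", None),
]

# Editing the calendar means adding, adapting, or removing Spell records;
# no post-hoc conversion from calendar cells to spells is needed.
print(len(spells))
```

Storing responses directly in this shape is what allows the instrument to skip a conversion step between the on-screen calendar and the analysis-ready data.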

Automated Split Questionnaire Design: The Way Forward in Survey Research?

Dr Daniel Weitzel (Colorado State University)
Dr Sebastian Tschiatschek (University of Vienna)
Mr Simon Rittel (University of Vienna)
Dr Katharina Pfaff (University of Vienna) - Presenting Author
Professor Sylvia Kritzinger (University of Vienna)

Ever-decreasing response rates in face-to-face, paper, and telephone surveys, as well as the relatively low costs of online surveys, have led to widespread adoption of the online survey mode. This new mode, however, faces its own challenges, particularly when working with offline-recruited samples. One difficulty researchers face is obtaining representative samples; another is the mode-specific need for shorter surveys. To address these challenges, we propose and evaluate a split questionnaire design to overcome survey length constraints, extending a proposal by Axenfeld et al. (2022). We introduce a machine learning approach that can automatically generate n split questionnaire designs that maximize across-split imputability of survey responses. Based on existing surveys and corresponding responses, our approach generates survey design suggestions by allocating questions to n different questionnaire splits. Questions are assigned to each split such that across-split information gain for omitted responses is maximized. Our approach thereby increases the amount of information researchers can obtain from the same sample size through an automated process that learns from previous surveys. This allows us to field shorter questionnaires, which may increase the quality of responses in a saturated “market of respondents”. Finally, we also test to what extent we obtain representative samples by applying this approach.
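To make the allocation idea concrete, here is a minimal sketch under the simplifying assumption that pairwise correlations proxy for imputability: each question is placed in the split whose current members are least correlated with it, so every split retains strong predictors for the questions the other splits omit. This greedy heuristic is purely illustrative; the authors' actual machine-learning method is not shown here, and all names are invented.

```python
# Hypothetical greedy sketch of split questionnaire allocation,
# NOT the authors' method: correlation stands in for imputability.
import numpy as np

def allocate_splits(responses: np.ndarray, n_splits: int) -> list:
    """responses: (respondents x questions) matrix of numeric answers."""
    corr = np.abs(np.corrcoef(responses, rowvar=False))
    n_questions = corr.shape[0]
    splits = [[] for _ in range(n_splits)]
    # Process questions in a fixed order; a real method would optimize jointly.
    for q in range(n_questions):
        # Mean |correlation| between question q and each split's current members.
        costs = [np.mean([corr[q, m] for m in s]) if s else 0.0 for s in splits]
        # Put q where it is least redundant with what that split already holds.
        splits[int(np.argmin(costs))].append(q)
    return splits

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 9))     # 200 simulated respondents, 9 questions
print(allocate_splits(data, 3))      # three splits jointly covering questions 0..8
```

Each respondent would then answer only one split, and the omitted questions would be imputed from the correlated ones retained in the other splits.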

Grid Questions in PC and Smartphone Surveys: Alternative Layouts and Implications for Data Quality and Survey Estimates

Dr Gregor Čehovin (University of Ljubljana, Faculty of Social Sciences) - Presenting Author
Professor Vasja Vehovar (University of Ljubljana, Faculty of Social Sciences)

Grid questions use a table layout for a series of items with the same introduction and identical response categories. Because of their complexity, grids already presented some challenges when designing surveys for PCs, and these concerns have been amplified with surveys on smartphones. The problems associated with the smaller screens of smartphones can be addressed to some extent with web survey software that automatically adjusts font size, graphic elements, and spacing between elements. On the one hand, studies have suggested decomposing grids into item-by-item layouts in which response categories are repeated for each item. On the other hand, some studies have argued that this is unnecessary because the layout space saved is lost when the grid is decomposed. To address this issue, an experimental web survey (n = 4,644) was conducted in Slovenia that assessed grids and four item-by-item alternatives (i.e., scrolling, unfolding, paging, and horizontal scrolling) using 10 response quality indicators and 20 survey estimates. The results suggest that grids can be safely replaced by unfolding or scrolling item-by-item layouts on both PCs and smartphones. Conversely, the results show that not decomposing the grids has several drawbacks in terms of response quality and differences in survey estimates. A more complex issue is the precise choice of an item-by-item alternative. While this also depends on the circumstances of the study and the researcher's preferences, the specific choice between a paging and an unfolding (or scrolling) item-by-item layout involves a complex trade-off between response quality and device effects. To address this issue, a follow-up experiment with a web survey was conducted to further investigate the role of the paging design from the perspective of response quality and survey estimates.