Thursday 20th July, 16:00 - 17:30 Room: Q2 AUD2


Improving Mobile Web Questionnaires

Chair: Mr Arnaud Wijnant (CentERdata)

Session Details

As a growing number of respondents choose to complete Web surveys using their smartphones rather than laptops and desktops, there is increasing interest among survey researchers in how to design questionnaires for smartphones.

Some features of mobile Web surveys are shared with conventional Web surveys (e.g., both are self-administered, computerized, and interactive), but other features are distinct. As a result, survey researchers have to consider design principles for conventional Web surveys while taking into account the unique features of mobile Web surveys that have implications for questionnaire design. These include the fact that smartphones have small, narrow touchscreens that can reduce the size of response options (or partially hide them from view), and that respondents using smartphones may have fragmented attention because of distractions from their surrounding environment. This session invites presentations that explore this relatively new topic of mobile Web questionnaire design.

We especially welcome presentations with a focus on:

1. approaches to optimizing questionnaires for smartphones;
2. the impact of different question types (text boxes, drop boxes, sliders, spinners) on data quality;
3. adapting grids for smartphones; and
4. grouping questions on the page.

Paper Details

1. How Should We Adapt Questionnaires for Smartphones? Evidence from UK Surveys and an Experiment on Grids
Mr Tim Hanson (Kantar Public)

A growing proportion of people are choosing to complete online social surveys using smartphones. As ownership and use of smartphones continue to grow, it is crucial that we adapt survey design to enable people to complete questionnaires on their device of choice, without negatively affecting the respondent experience or data quality.

This paper draws on evidence from a range of UK social surveys to show how device use affects survey experience and behaviour. We present results from usability testing conducted on a number of UK surveys, including Understanding Society (the UK Household Longitudinal Study), where respondents completed surveys on smartphones. This allows us to illustrate some of the challenges associated with completing social surveys on smartphones, and to propose some key design principles to follow when adapting questionnaires for mobile devices.

Previous research – and our own usability testing – has highlighted particular issues with completing grid questions on smartphones (e.g. McClain & Crawford, 2013). The traditional grid format that has been widely used on questionnaires for many years can appear cluttered on a narrow smartphone screen, and this in turn can increase respondent burden, increase the risk of miscoding responses and result in higher drop-out rates.

In this paper we present the results of an experiment comparing traditional grids with three alternatives: item-by-item scrolling (sometimes known as ‘stacked grids’), item-by-item paging, and dynamic grids. The dynamic grid format is a relatively recent development and presents a more interactive option that is designed for smartphones and other touchscreen devices.

We compare results from the formats across a number of analysis dimensions, including substantive responses, levels of missing responses, ‘Don’t know’ rates, question timings, flatlining, and respondent assessments. Results are also compared across device types and screen dimensions. The results were broadly similar across the four formats, suggesting that in this case the format did not substantially affect responses. However, there were differences in question timings and levels of missing responses, with dynamic grids performing well.
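
As an illustration of how flatlining can be operationalized (this is a minimal sketch, not the authors' code), the snippet below flags respondents who give an identical answer to every item in a grid block; the data layout and column names are hypothetical.

    import pandas as pd

    def flatlining_rate(df: pd.DataFrame, item_cols: list[str]) -> float:
        """Share of respondents giving the identical answer to every grid item."""
        flat = df[item_cols].nunique(axis=1) == 1  # one distinct value per row
        return float(flat.mean())

    # Hypothetical responses to a three-item grid on a 1-5 scale
    responses = pd.DataFrame({
        "sat_1": [3, 5, 2, 4],
        "sat_2": [3, 1, 2, 4],
        "sat_3": [3, 2, 2, 4],
    })
    print(flatlining_rate(responses, ["sat_1", "sat_2", "sat_3"]))  # 0.75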

Alongside this experiment we have conducted usability testing with respondents to provide qualitative feedback on ease of use and perceptions of different grid formats. Respondents in our testing found the dynamic grids to be engaging and intuitive, with many expressing a preference for this format.

As social studies increasingly move online, we need to ensure that questionnaires are optimised for mobile devices. This paper adds important evidence to these debates as we seek to effectively design and adapt social surveys to support completion on smartphones. We consider one of the key challenges associated with the shift to mobile devices: how to deal with grids. Through our experiment and usability testing we assess the pros and cons of alternative formats, including an interactive dynamic grid approach, and consider implications for adapting existing surveys.


2. Development of standards and guidelines for mobile survey instruments
Dr Lin Wang (U.S. Census Bureau)
Dr Christopher Antoun (U.S. Census Bureau)
Mr Russell Sanders (U.S. Census Bureau)
Ms Elizabeth Nichols (U.S. Census Bureau)
Ms Erica Olmsted Hawala (U.S. Census Bureau)
Mr Brian Falcone (U.S. Census Bureau)
Ms Ivonne Figueroa (U.S. Census Bureau)

Completing a survey on a mobile device can be viewed as a series of information exchanges between the respondent (operator) and the mobile device (machine), a process known as human-computer interaction. A key concern with mobile survey instruments is how to design user interfaces that maximize response quality. If not designed appropriately, the user interface may contribute to measurement error by impairing respondents’ perception (e.g., reading survey questions) and action (e.g., making responses). To address these concerns, the U.S. Census Bureau launched a project to develop evidence-based standards and guidelines for optimizing the user interface design of mobile survey instruments. Our goal is to maximize human capacity under the constraints of a survey instrument on a mobile device.
We started the project by characterizing the process of information exchange between the respondent and the mobile survey instrument, constructing an Information Processing Model of Mobile Device Operation (MoDO). The MoDO illustrates the information flow from perceiving visual information displayed on a mobile device screen, to the human brain, to the finger that touches a target on the mobile screen. In the MoDO, three critical factors have direct implications for survey data collection: a respondent’s vision, fingertip size and finger mobility, and cognitive ability. Informed by the MoDO, we also constructed a Mobile Respondent Model (MoRM) that represents a respondent with the minimum physical and mental capacity needed to complete a survey on a mobile device. Such capacity includes vision, fingertip size, finger mobility, and cognitive capacity. The rationale for using the MoRM is that, if a person with the MoRM’s capabilities can successfully complete a mobile survey designed in accordance with the standards and guidelines developed in this project, anyone with the same or better physical and mental capabilities can perform at least as well.
Using the MoDO, we further elaborated respondents’ interaction with the mobile survey in a two-tier model. Tier I represents respondents’ basic perceptual and motor capabilities, e.g., seeing geometric figures (such as icon shapes) and touching an icon target on the screen of a mobile device. Based on Tier I, we developed a set of standards (the Standards Domain). The Standards Domain covers basic perceptual-level design requirements for touch target size and spacing, text size and spacing, foreground and background luminance and contrast, and color combination.
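
For illustration only, a perceptual-level checker of the kind the Standards Domain describes might look like the sketch below; the threshold values are borrowed from common platform and WCAG guidance, not from the standards developed in this project, and the element attributes are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class UIElement:
        label: str
        width_px: int          # touch target width
        height_px: int         # touch target height
        font_px: int           # rendered text size
        contrast_ratio: float  # foreground/background luminance contrast

    # Illustrative thresholds only, not the project's actual standards
    MIN_TARGET_PX = 48   # common platform guidance for touch targets
    MIN_FONT_PX = 16     # common guidance for body text on small screens
    MIN_CONTRAST = 4.5   # WCAG AA contrast ratio for normal text

    def check_element(el: UIElement) -> list[str]:
        """Return the perceptual-level requirements an element violates."""
        problems = []
        if min(el.width_px, el.height_px) < MIN_TARGET_PX:
            problems.append("touch target below minimum size")
        if el.font_px < MIN_FONT_PX:
            problems.append("text below minimum size")
        if el.contrast_ratio < MIN_CONTRAST:
            problems.append("insufficient foreground/background contrast")
        return problems

    print(check_element(UIElement("Next button", 44, 44, 14, 3.2)))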
Tier II concerns how respondents effectively and efficiently read survey questions and make responses, involving such behaviors as distinguishing question instructions from question stems and formulating responses with various response options. Tier II is built upon Tier I: to be successful on Tier II tasks, one must first be successful on Tier I tasks. We developed a collection of guidelines (the Guidelines Domain) based on Tier II. The Guidelines Domain covers six categories of mobile survey instrument components: question instruction, question stem, response, navigation, support features, and general features.
For each standard and guideline, we collected supporting evidence through literature review or experimentation. Selected evidence-collection studies will be highlighted.


3. The effect of matrix and single questions on response behavior when switching to a mobile-first design
Ms Katharina Burgdorf (University of Mannheim)
Professor Annelies Blom (University of Mannheim)
Dr Christian Bruch (University of Mannheim)
Mr Melvin John (University of Mannheim)
Professor Florian Keusch (University of Mannheim)

The aim of this paper is to present the effects of matrix and single questions on response behavior for smartphone versus tablet/desktop participants in the German Internet Panel, a probability-based online panel. On the one hand, matrix questions may be especially valuable when comparability across items is of interest, and they may offer time savings. On the other hand, matrix questions may impose an extra cognitive burden and thus lead to undesirable satisficing behavior, such as skipping items or straightlining. These effects may be even more pronounced when respondents participate via smartphones, due to the smaller screen size and the need for scrolling. Implementing a mobile-first design may therefore require a switch to smartphone-compatible single questions. This may also affect established measures over time, if respondents on desktops and/or smartphones answer single questions differently than matrix questions.
To investigate the effects of matrix and single questions on response behavior in detail, we conducted experiments in the German Internet Panel between September 2015 and November 2016. We investigate the following research questions: Do respondents satisfice more when presented with matrix questions compared to single questions? Most importantly, do these effects differ between desktops/tablets and smartphones?
Initial analyses were conducted with logistic regressions and ANOVAs. At the conference, we will also present multilevel models that take into account the clustering of experiments within respondents and within waves.
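
As a rough illustration of such a model (not the authors' analysis), the sketch below fits a linear mixed model with a random intercept per respondent, crossing question format with device type; the file and variable names are hypothetical, and the actual analyses also account for clustering within waves and use logistic specifications where appropriate.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format file: one row per respondent-by-experiment observation
    df = pd.read_csv("gip_grid_experiments.csv")

    # Random intercept per respondent; fixed effects for question format,
    # device type, and their interaction
    model = smf.mixedlm(
        "straightlined ~ matrix_format * smartphone",
        data=df,
        groups=df["respondent_id"],
    )
    print(model.fit().summary())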


4. PC and mobile web surveys: grids or item-by-item format?
Mrs Aigul Mavletova (Higher School of Economics)
Mr Daniil Lebedev (Higher School of Economics)
Dr Mick Couper (University of Michigan)

While grid (matrix) questions have become a standard and widely used format in PC web surveys, there is no agreement on the best format for mobile web surveys. Some software packages present grid questions in grid format on a PC and item by item on a smartphone, while others present grid questions in grid format across all devices. Although several experiments have compared different formats of grid-type questions among mobile web respondents, none has compared data equivalence across both devices and formats. We will conduct a two-wave experiment in which we vary the device respondents use to complete the survey (a cross-over design in which the device changes in the second wave from a mobile phone to a PC or from a PC to a mobile phone) and the question presentation format (grid vs. item-by-item).

The experimental design is expected to be the following:

1. 1st wave: grids on smartphones; 2nd wave: grids on PC
2. 1st wave: grids on PC; 2nd wave: grids on smartphones
3. 1st wave: item-by-item on smartphones; 2nd wave: item-by-item on PC
4. 1st wave: item-by-item on PC; 2nd wave: item-by-item on smartphones
5. 1st wave: item-by-item on smartphones; 2nd wave: grids on PC
6. 1st wave: grids on PC; 2nd wave: item-by-item on smartphones
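
For illustration only, a simple randomized assignment of panelists to the six conditions listed above could be sketched as follows; the condition labels mirror the design, but the assignment procedure and identifiers are hypothetical, not the panel's actual fieldwork procedure.

    import random

    # (wave 1 condition, wave 2 condition) for each experimental group
    CONDITIONS = [
        ("grid / smartphone", "grid / PC"),
        ("grid / PC", "grid / smartphone"),
        ("item-by-item / smartphone", "item-by-item / PC"),
        ("item-by-item / PC", "item-by-item / smartphone"),
        ("item-by-item / smartphone", "grid / PC"),
        ("grid / PC", "item-by-item / smartphone"),
    ]

    def assign(panelist_ids: list[str], seed: int = 42) -> dict[str, tuple[str, str]]:
        """Map each panelist to a (wave 1, wave 2) condition in roughly equal groups."""
        rng = random.Random(seed)
        ids = panelist_ids[:]
        rng.shuffle(ids)
        return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

    print(assign([f"r{i}" for i in range(12)]))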

Data Collection
The experiment will be conducted in Russia using a volunteer online access panel run by Online Market Intelligence (http://www.omirussia.ru/en). We plan to collect data in December 2016.

Results of the experiment will be presented at the conference.