Thursday 20th July, 14:00 - 15:30 Room: Q2 AUD2


Adapting online surveys for mobile devices 2

Chair Dr Olga Maslovskaya (University of Southampton)
Coordinator 1 Professor Gabriele Durrant (University of Southampton)
Coordinator 2 Mr Tim Hanson (Kantar Public)

Session Details

The substantial recent increase in levels of ownership and use of mobile devices (particularly smartphones) has been reflected in a rise in the proportion of respondents completing surveys using these devices. For some large social surveys in the UK, for example, between 10% and 20% of respondents now use a smartphone to complete the questionnaire.

This recent shift poses challenges for survey designers, as they seek to enable respondents to complete on their device of choice without any loss of data quality. Solutions to this challenge are varied, and range from minimal adaptation to major overhaul. The latter may include steps to fully optimise the survey layout and presentation for mobile devices, revisions to questionnaire content (e.g. reduced questionnaire length, shorter questions) or alternative completion formats (e.g. splitting surveys into ‘chunks’ that can be completed over a period of time).

For this session we welcome papers on a range of topics relating to adapting surveys for mobile devices, including the following:

• Attempts to produce ‘mobile optimised’ versions of questionnaires
• New question formats that may be better suited to mobile devices (e.g. more interactive)
• Issues with question formats that are known to be problematic on mobile devices (e.g. grids)
• Experimentation to assess the impact of different survey or question formats
• Analysis of data quality indicators that highlights particular issues relating to mobile devices
• Usability testing conducted on mobile devices to identify common issues

We are interested in examples from a range of different types of online survey, including ad hoc studies, tracking projects, longitudinal studies, online panels and mixed mode surveys that include online components. We encourage papers from researchers with a variety of backgrounds and across different sectors, including academia, national statistics and research agencies.

This session aims to foster discussion, knowledge exchange and shared learning among researchers and methodologists around issues related to increased use of mobile devices for survey completion. The format of the session will be designed to encourage interaction and discussion between the presenters and audience.

Paper Details

1. Data Chunking in a longitudinal probability-based survey
Dr Vera Toepoel (Utrecht University)
Dr Peter Lugtig (Utrecht University)

Mobile phones are taking over key tasks formerly done on PCs and laptops, and are commonly used for short messaging. We should therefore 1) move to mobile or mobile-friendly surveys, and 2) find ways to shorten questionnaires (most surveys are too long for mobiles). One way to reduce survey length is data chunking: offering questionnaires in smaller pieces. Data chunking, also known as modular survey design, is a way to cut down long survey questionnaires. While data chunking is not new to the survey world (see Johnson, Kelly & Stevens, 2011), there is no systematic research into how data chunking relates to Total Survey Error and mobile surveys. There are several ways to chunk a survey: ‘across respondent’ modularisation, whereby different respondents take each piece, and ‘within respondent’ modularisation, whereby the same respondent is permitted to take pieces of a survey at different times.

A longitudinal survey in the probability-based LISS Panel of CentERdata is divided into several experimental groups to investigate data chunking within respondents: panellists who own a mobile phone with an Internet connection are randomly assigned to a) normal survey length, b) the survey cut into 5 pieces, or c) the survey cut into 10 pieces. In addition, we experiment with (push) notifications via email and SMS. We investigate the number of complete and incomplete responses and look at indicators of data quality (straightlining, primacy, survey length, satisfaction with the survey, etc.). In addition, we look at context effects by comparing factor analyses for different scales in the survey. We benchmark our data against other panel variables, including previous waves of the core questionnaire we used. This paper is highly relevant at a time when response rates are falling and survey length should be kept to a minimum.
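As a rough illustration of the two modularisation strategies described above, the sketch below splits a questionnaire into chunks and assigns them either across or within respondents. All names and parameters (split_into_chunks, assign_across, assign_within, the 50-item questionnaire, the choice of 5 chunks) are hypothetical and do not describe the actual LISS implementation.

    # Hypothetical sketch of modular survey design ("data chunking").
    # Names, chunk counts and assignment logic are illustrative only, not the LISS design.
    import random

    def split_into_chunks(questions, n_chunks):
        """Divide an ordered questionnaire into n_chunks roughly equal pieces."""
        size = -(-len(questions) // n_chunks)  # ceiling division
        return [questions[i:i + size] for i in range(0, len(questions), size)]

    def assign_across(respondents, chunks):
        """'Across respondent' modularisation: each respondent answers one chunk."""
        return {r: random.choice(chunks) for r in respondents}

    def assign_within(respondents, chunks):
        """'Within respondent' modularisation: each respondent answers every chunk,
        spread over different occasions (e.g. one chunk per day)."""
        return {r: list(chunks) for r in respondents}

    questions = [f"q{i}" for i in range(1, 51)]  # a hypothetical 50-item questionnaire
    chunks = split_into_chunks(questions, 5)     # e.g. a "cut into 5 pieces" condition
    print(assign_across(["r1", "r2"], chunks))
    print(assign_within(["r1"], chunks))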


2. Comparing grids, vertical and horizontal item-by-item formats for PCs and Smartphones
Dr Melanie Revilla (RECSM-Universitat Pompeu Fabra)
Professor Mick Couper (University of Michigan)

A lot of previous research has compared grids with item-by-item formats. However, the results in the literature are mixed, and more research is needed, in particular now that a substantial proportion of respondents answer via smartphones.
In this study, we implemented an experiment with seven groups, varying the device on which a respondent had to answer (PC or smartphone), the presentation of the questions (grid, item-by-item vertical, item-by-item horizontal) and, for smartphones only, the visibility of the “next” button (always visible – the fieldwork company’s current practice – or visible only at the end of the page, after scrolling down).
The data were collected by the Netquest online fieldwork company in Spain between 15 September and 3 October 2016. A total of 1,476 respondents participated in the survey and were randomly assigned to one of the seven experimental groups. The survey included several experiments. In this study, we use three sets of questions, of four, 10 and 10 questions respectively. The first four questions were about the perceived usefulness of market research for oneself, consumers, firms and society. The next 20 asked about respondents’ hypothetical willingness to share different types of data: passive measurement on devices they already use; wearing special devices to passively monitor activity; being provided with measurement devices and then self-reporting the results; providing physical specimens or bodily fluids (e.g. saliva); and others. The first set (4 questions) used a scale from 1 to 5, plus an “I don’t know” option. The second set (10 questions) used a scale from 1 to 5, plus “not applicable”, whereas the third set (10 questions) used a scale from 0 to 10, plus “not applicable”. This last set also included an Instructional Manipulation Check.
We compared the groups on a number of aspects: item missing data, the proportion of “don’t know” or “not applicable” answers, distributions of the substantive answers, failure of the Instructional Manipulation Check, non-differentiation/straight-lining, answer changes, completion time, etc. The most striking difference concerns the placement of the “next” button in the smartphone item-by-item conditions: when the button is always visible, item missing data increases substantially.
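Non-differentiation/straight-lining, one of the data-quality indicators mentioned above, can be operationalised in several ways; the minimal sketch below uses one common operationalisation (the share of items in a battery that receive the respondent’s modal answer). The function name and this particular definition are assumptions for illustration, not necessarily the measure used in this study.

    # Minimal sketch of a straight-lining (non-differentiation) indicator.
    # This operationalisation is an assumption, not necessarily the paper's measure.
    from collections import Counter

    def straightlining_share(answers):
        """Share of items in a battery equal to the respondent's modal answer.
        1.0 means every item received the same response (pure straight-lining)."""
        counts = Counter(answers)
        modal_count = counts.most_common(1)[0][1]
        return modal_count / len(answers)

    print(straightlining_share([3, 3, 3, 3, 3]))  # 1.0 - pure straight-lining
    print(straightlining_share([1, 4, 2, 5, 3]))  # 0.2 - fully differentiated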


3. Slider bars in multi-device Web surveys
Miss Angelica Maineri (Tilburg University/University of Trento)
Mr Ivano Bison (University of Trento)
Mr Ruud Luijkx (Tilburg University)

The rapid pace of technological development is challenging survey research. The penetration of Internet-enabled mobile devices has increased tremendously over recent years, and respondents invited to complete a Web survey increasingly do so via a mobile device. Analysing the consequences of this unintended mobile access to online surveys for data quality is one of the major challenges facing survey methodologists today.
Web surveys enable the implementation of new interactive tools, such as slider bars, that exploit the potential of the Web to allegedly improve measurement. While the use of these interactive measurement tools is not unproblematic in Web surveys in general, the consequences of completing them on smartphones have yet to be explored.
Two online surveys, collected among students of the University of Trento in 2015 and 2016 (comprising around 6300 and 4200 respondents respectively), contained one experiment each on slider bars. The first allows us to investigate the effects of using numeric labels on the slider bar; the second, the effects of the initial position of the handle. The device used was detected via User Agent Strings and, in one survey, also via a question asking which device was employed. Detecting the device allows us to investigate whether the features of the slider bars affect measurement differently depending on the device. Moreover, we are able to investigate the role of screen size, screen orientation and dominant hand.
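Device detection from User Agent Strings is typically done with simple pattern matching on the string; the sketch below illustrates the general idea only and is deliberately cruder than the classification actually applied to these surveys (the function name and matching rules are assumptions).

    # Simplified illustration of classifying devices from User Agent Strings.
    # The rules below are illustrative; real UA classification uses far more detailed patterns.
    def classify_device(user_agent):
        """Return a coarse device class ('smartphone', 'tablet' or 'pc') from a UA string."""
        ua = user_agent.lower()
        if "ipad" in ua or ("android" in ua and "mobile" not in ua):
            return "tablet"
        if "mobile" in ua or "iphone" in ua:
            return "smartphone"
        return "pc"

    print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 10_0 like Mac OS X) Mobile/14A346"))
    print(classify_device("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))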
Preliminary results show that when the handle is initially placed at one of the extremes of the slider bar, smartphone users show a higher propensity to anchor, and that results differ depending on whether the handle has to be moved from left to right or from right to left. Moreover, the use of numeric labels seems to lead to higher accuracy in responses only on PCs and tablets, while on smartphones there are no differences in scores with or without labels.
This paper aims to shed light on the consequences of unintended mobile access for design decisions. The contribution is manifold: first, it adds to the literature by bringing new evidence on the consequences of unintended mobile access to Web surveys. Second, it contains unique experimental designs that allow us to investigate specific features of slider bars. Moreover, it enables us to explore the effects of screen size, screen orientation and dominant hand, information that is rarely available. Understanding to what extent interactive tools such as slider bars can be fruitfully employed in multi-device surveys without affecting data quality is a key challenge for those who want to exploit the potential of Web-based data collection without undermining measurement.


4. Applying Usability Features of Popular Apps to Mobile Surveys: A Content Analysis
Dr Jessica Broome (Jessica Broome Research / University of Michigan)
Dr Christopher Antoun (US Census Bureau)
Mr Randall Evans (Jessica Broome Research)

Prior research on mobile Web surveys has suggested that questionnaires should be adapted (i.e. optimized) for small touchscreens, such as those on mobile phones and tablets, rather than mimic desktop surveys. One consideration when designing mobile-optimized questionnaires is, where possible, to make use of features that respondents are accustomed to seeing in popular mobile apps and websites. Encountering a feature they have seen many times before in familiar contexts is likely to make a survey seem more familiar and less daunting. However, there has been little effort to systematically identify these features.

As a first step towards this goal, we conducted a content analysis of usability features in the ten most visited mobile apps and the ten most visited mobile websites. Examples of these apps, based on 2015 Nielsen data ranking the penetration of popular apps, include Facebook, Google Search, and Google Maps.

Given that mobile apps and websites often focus on presenting or sharing information, we concentrate on features designed to collect or gather information that could potentially be used in mobile questionnaires. Examples include features for numeric and text entry, such as box size and labeling conventions; features used when respondents select among options, such as check boxes, swiping, scrolling, and drop-down boxes; and the use of rating systems or scales.

By focusing on information-gathering mechanisms that are commonly used in popular mobile apps and websites, our results can inform design decisions for mobile-optimized questionnaires. It is our hope that mobile-optimized surveys using familiar features will be more inviting and easily comprehensible to respondents, eventually enhancing respondent engagement and comfort and reducing both non-response error and measurement error.