

Passive Smartphone Data Collection and Additional Tasks in Mobile Web Surveys: Willingness, Non-Participation, Consent, and Ethics 3

Session Organisers: Dr Bella Struminskaya (Utrecht University)
Professor Florian Keusch (University of Mannheim)
Time: Wednesday 17th July, 09:00 - 10:30
Room: D19

Smartphones allow researchers to collect data through sensors such as GPS and accelerometers to study movement, and to passively collect data such as browsing history and app usage, in addition to self-reports.

Passive mobile data collection allows researchers to broaden the scope of the research questions that can be answered and can potentially decrease measurement error and reduce respondent burden. However, there are issues of selectivity, representativeness, and ethics in collecting such data. Respondents have to be willing to provide access to sensor data or to perform additional tasks such as downloading apps, taking pictures, and providing access to their smartphone's data. If respondents who are willing to engage in these tasks differ from smartphone users who are unwilling, results based on passively collected data might be biased. Moreover, little is known to date about how to ask for informed consent in passive data collection studies. In this session, we aim to bring together empirical evidence on the state-of-the-art use of sensor measurement and other additional tasks on smartphones in combination with mobile web surveys. We welcome presentations of results from (large-scale) studies with diverse sensors and tasks from multiple countries and research settings. Possible topics include:

- current practice in collecting passive / smartphone sensor data

- implementation, wording, and placement of consent questions

- willingness to allow smartphone sensor measurements

- willingness to perform additional tasks using smartphones

- nonparticipation bias in studies using smartphone sensor measurements

- ethical issues and privacy concerns when collecting passive data

Keywords: smartphone data collection, passive measurement, smartphone sensors, emerging methods

A Smartphone App to Record Food Purchases and Acquisitions

Dr Ting Yan (Westat) - Presenting Author
Dr Marcelo Simas (Westat)
Ms Janice Machado (Westat)

The proportion of American adults owning a smartphone has almost doubled since 2011; about two thirds of American adults now own a smartphone of some kind, and for many of them the smartphone has become an important tool in their lives (Pew Research Center, 2015). Given this trend, researchers are also increasingly using smartphones for various research purposes such as ecological momentary assessment, transportation and time-use diary studies, health monitoring, and passive mobile data collection (e.g., Link, Lai, and Bristol, 2014; Sonck and Fernee, 2013; Revilla, Ochoa, and Loewe, 2016). This paper describes an innovative use of smartphones to collect food purchases and acquisitions, which used to be collected through paper diaries. We developed a smartphone app that allows respondents to enter, on their smartphone, all foods and drinks they have obtained. The app takes advantage of smartphone functionalities to reduce reporting burden. For instance, the app tracks respondents’ geolocations passively and uses these locations to cue reporting of food acquisitions. In addition, we linked the app to various databases to reduce the burden of reporting the name and address of the place where food was obtained. Furthermore, the app allows respondents to scan barcodes, take pictures of food items, and use type-ahead to get food item descriptions. Respondents can also take a picture of their receipts and upload them directly from the phone. In this paper, we will discuss the feasibility of using such a smartphone app to record food purchases and acquisitions in a convenience sample. We will focus on the proportion of respondents using the app, their demographic characteristics, and their experience with the app. Implications of the findings will also be discussed.
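
As a rough illustration of the location-cueing idea described above, the sketch below matches a passively recorded GPS fix against a small outlet list and triggers a reminder when the fix falls within a chosen radius. The outlet list, radius, and function names are hypothetical assumptions for illustration, not the Westat app's implementation.

```python
# Hypothetical sketch: cue a food-acquisition prompt when a passively
# recorded GPS fix falls near a known food outlet. Names, thresholds,
# and the outlet list are illustrative assumptions only.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# In practice the outlet list would come from the linked place databases.
FOOD_OUTLETS = [
    {"name": "Example Grocery", "lat": 38.9847, "lon": -77.0947},
    {"name": "Example Cafe", "lat": 38.9812, "lon": -77.1005},
]

def cue_prompt(gps_fix, radius_m=75):
    """Return outlets within radius_m of the fix, to trigger a diary reminder."""
    lat, lon = gps_fix
    return [o["name"] for o in FOOD_OUTLETS
            if haversine_m(lat, lon, o["lat"], o["lon"]) <= radius_m]

if __name__ == "__main__":
    nearby = cue_prompt((38.9849, -77.0950))
    if nearby:
        print("Did you buy food or drink at:", ", ".join(nearby), "?")
```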


The Effects of Feedback on Participation and Reporting in Mobile Data Collection

Dr Alexander Wenz (University of Essex) - Presenting Author
Professor Annette Jäckle (University of Essex)
Professor Mick P. Couper (University of Michigan)
Dr Jonathan Burton (University of Essex)

Mobile technologies are increasingly used for survey data collection, to capture new forms of data or increase the level of accuracy or detail. Surveys that have trialled mobile data collection, however, have generally reported low participation rates, suggesting that many respondents are not willing or able to participate. Studies testing approaches to increase participation, for example using different monetary incentives, have had limited success in boosting participation rates to date.

In this paper, we evaluate the effectiveness of an alternative approach to increase participation in a mobile app study on consumer expenditure: providing feedback to respondents about their reported expenditure. While feedback might be interesting and useful for respondents and might motivate them to keep participating in the study, a danger of providing feedback on previous survey responses is that it can potentially affect the behaviours that researchers intend to measure, resulting in panel conditioning.

Members of an online access panel in the UK (n = 2,866) were invited to record their expenditure for one month, using either an app diary downloaded to their smartphone or a browser-based online diary. They were randomly assigned to one of three treatment groups: group 1 was promised feedback in the survey invitation, group 2 was not promised feedback but received it, and group 3 did not receive feedback. The two feedback groups could view an additional section in the app/online diary with a cumulative summary of the expenditure they had reported, by category. The study was followed by a debrief questionnaire.
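
The sketch below is a minimal, hypothetical illustration of the three-arm assignment and the cumulative expenditure-by-category summary described above; the variable names, categories, and amounts are assumptions, not the study's code.

```python
# Illustrative sketch of a three-arm feedback design and the cumulative
# expenditure-by-category summary; names and values are assumptions.
import random
from collections import defaultdict

ARMS = ("promised_feedback", "unpromised_feedback", "no_feedback")

def assign_arm(rng=random):
    """Randomly allocate a panel member to one of the three treatment groups."""
    return rng.choice(ARMS)

def cumulative_summary(diary_entries):
    """Sum reported spending by category, as shown in the feedback section."""
    totals = defaultdict(float)
    for entry in diary_entries:  # e.g. {"category": "Groceries", "amount": 12.40}
        totals[entry["category"]] += entry["amount"]
    return dict(totals)

entries = [{"category": "Groceries", "amount": 12.40},
           {"category": "Eating out", "amount": 8.00},
           {"category": "Groceries", "amount": 5.10}]
arm = assign_arm()
if arm != "no_feedback":
    print(arm, cumulative_summary(entries))
```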

We use these data to examine the following questions:

- Does offering respondents feedback on their expenditure increase participation?

- Which types of respondents were sensitive to feedback? Which used the feedback, and which found it useful?

- What effects does feedback have on perceived burden and compliance with the task?

- What effect does feedback have on reported expenditure?


Are Smartphones Really the ‘Bad Guys’? Dispelling Smartphone Data Collection Myths: Evidence from a Large Random Probability Online UK Survey Run by ONS.

Mr Andrew Phelps (Office for National Statistics) - Presenting Author
Dr Olga Maslovskaya (University of Southampton)
Professor Gabriele Durrant (University of Southampton)

We live in a digital age with a high level of technology use. In the UK there is a move towards online survey data collection to deliver the Government's Digital by Default strategy, including the ambition to collect 75% of household responses online in the 2021 UK Census. Social surveys have also started embracing smartphones for data collection. However, concerns still exist that smartphones produce lower data quality than PCs/laptops and tablets. It is also not clear whether smartphones can increase online survey response rates among under-represented groups.

This paper is timely and will fill these knowledge gaps in the UK context. It uses Office for National Statistics (ONS) data from around 20,000 responding households across two Labour Force Survey (LFS) online experiments conducted in 2017. The main aim of these experiments was to understand how best to maximise initial online take-up of a labour market survey. The questionnaires in these experiments were designed to be respondent-centric and smartphone-first.

This paper aims first to assess sample composition within the experiment data across different device types and to compare it to the main LFS distributions as well as to the 2011 UK Census population estimates. It then compares specific data quality indicators, such as completion time, timeouts, and break-offs, across the devices respondents used in the survey. Finally, demographic profiles among under-represented groups will be assessed, and comparisons to the Understanding Society Wave 8 data will also be drawn.
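
As a hypothetical illustration of how such device-level quality indicators might be tabulated, the sketch below computes a median completion time and timeout/break-off rates per device; the field names and toy records are assumptions, not the ONS data or analysis code.

```python
# Illustrative tabulation of data quality indicators by device type.
# Field names and the toy records are assumptions for illustration.
from statistics import median
from collections import defaultdict

def quality_by_device(sessions):
    """Median completion time plus timeout and break-off rates per device."""
    grouped = defaultdict(list)
    for s in sessions:
        grouped[s["device"]].append(s)
    out = {}
    for device, recs in grouped.items():
        completed = [r["minutes"] for r in recs if r["outcome"] == "complete"]
        out[device] = {
            "n": len(recs),
            "median_minutes": median(completed) if completed else None,
            "timeout_rate": sum(r["outcome"] == "timeout" for r in recs) / len(recs),
            "breakoff_rate": sum(r["outcome"] == "breakoff" for r in recs) / len(recs),
        }
    return out

sessions = [{"device": "smartphone", "outcome": "complete", "minutes": 18},
            {"device": "smartphone", "outcome": "breakoff", "minutes": 6},
            {"device": "PC", "outcome": "complete", "minutes": 15}]
print(quality_by_device(sessions))
```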

Preliminary results suggest that the sample compositions of the experiment data across all devices used are the same for gender, but some discrepancies are observed for age, qualifications, and the area deprivation index. This paper will assess whether specifically allowing smartphone completion helps or hinders the balancing of sample composition.


How Long is Too Long? Examining the Effect of Screening Process Cycle Time on Conversion Rates

Mrs Patricija Caharijas (Nielsen)
Mr Jeff Scagnelli (Nielsen)
Mr Karol Szeplewicz (Nielsen) - Presenting Author

The global growth of smartphones and mobile apps has provided an opportunity to recruit from new groups of people and use them for different research purposes. The challenge with these approaches is how to build an efficient recruitment screening process for longitudinal studies that balances high conversion rates with sample requirements. Past experience shows that greater investment at the recruitment stage leads to higher-quality data and longer engagement in the sample. However, in the mobile environment it is critical to engage respondents quickly in order to maintain interest and engagement. Given this, there are some key considerations when designing a mobile data collection activation process in order to maximize participation in a study.

This study examines the success of recruiting respondents to a mobile app in which they report their purchases by scanning barcodes. The process was executed in different countries, so results can be compared across markets. The focus is on the impact of cycle time on conversion rates, given that a more traditional screening approach was used. The outcomes provide guidance on key considerations when using this approach and the trade-offs that come with it.
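
The sketch below is a hypothetical illustration of a conversion-rate-by-cycle-time comparison of the kind described above; the cycle-time bands, field names, and records are illustrative assumptions rather than Nielsen's data or definitions.

```python
# Illustrative conversion-rate comparison by screening cycle time and market.
# Bands, field names, and the toy records are assumptions for illustration.
from collections import defaultdict

def cycle_band(days):
    """Bucket the days elapsed between screening and app activation."""
    if days <= 3:
        return "0-3 days"
    if days <= 7:
        return "4-7 days"
    return "8+ days"

def conversion_rates(recruits):
    """Share of screened recruits who activated the app, by market and band."""
    counts = defaultdict(lambda: [0, 0])  # key -> [activated, screened]
    for r in recruits:
        key = (r["market"], cycle_band(r["cycle_days"]))
        counts[key][1] += 1
        counts[key][0] += r["activated"]
    return {k: activated / screened for k, (activated, screened) in counts.items()}

recruits = [{"market": "PL", "cycle_days": 2, "activated": 1},
            {"market": "PL", "cycle_days": 9, "activated": 0},
            {"market": "UK", "cycle_days": 5, "activated": 1}]
print(conversion_rates(recruits))
```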