ESRA 2019 Draft Programme at a Glance


Passive Smartphone Data Collection and Additional Tasks in Mobile Web Surveys: Willingness, Non-participation, Consent, and Ethics

Session Organisers: Dr Bella Struminskaya (Utrecht University)
Professor Florian Keusch (University of Mannheim)
Time: Tuesday 16th July, 11:00 - 12:30
Room: D17

Smartphones allow researchers to collect data through sensors such as GPS and accelerometers to study movement and passively collect data such as browsing history and app usage in addition to self-reports.

Passive mobile data collection allows researchers to broaden the scope of the research questions that can be answered and can potentially decrease measurement error and reduce respondent burden. However, there are issues of selectivity, representativeness, and ethics when collecting such data. Respondents have to be willing to provide access to sensor data or to perform additional tasks such as downloading apps, taking pictures, and providing access to their smartphone's data. If respondents who are willing to engage in these tasks differ from unwilling smartphone users, results based on passively collected data might be biased. Moreover, little is known to date about how to ask for informed consent in passive data collection studies. In this session, we aim to bring together empirical evidence on the state-of-the-art use of sensor measurement and other additional tasks on smartphones in combination with mobile web surveys. We welcome presentations of results from (large-scale) studies with diverse sensors and tasks from multiple countries and research settings. Possible topics include:

- current practice in collecting passive / smartphone sensor data

- implementation, wording, and placement of consent questions

- willingness to allow smartphone sensor measurements

- willingness to perform additional tasks using smartphones

- nonparticipation bias in studies using smartphone sensor measurements

- ethical issues and privacy concerns when collecting passive data

Keywords: smartphone data collection, passive measurement, smartphone sensors, emerging methods

Understanding Resistance to Participate in Passive Smartphone Data Collection in Mobile-web Surveys

Professor Caroline Roberts (University of Lausanne) - Presenting Author
Dr Jessica Herzing (University of Lausanne/ FORS)
Professor Daniel Gatica-Perez (EPFL/ IDIAP)

Smartphones present many exciting opportunities for social researchers across a wide range of disciplines. Yet there are numerous challenges associated with gaining the cooperation of research participants, maintaining their privacy, and developing and exploiting technologies that are simultaneously adapted to human behaviour, engaging to use, and conducive to the sharing of information (particularly for research purposes). Early studies investigating willingness to participate in smartphone data collection – and particularly, to consent to passive data collection – demonstrate the scale of these challenges: participation and consent rates are typically low and vary systematically across subgroups, and conventional monetary incentives may be insufficient on their own to offset resistance. Understanding the nature of resistance to participating in smartphone data collection and developing ways to overcome it are, therefore, key priorities for survey methodology. In this paper, we present results from user experience research aimed at addressing this need. The research used a mixed-methods approach incorporating focus groups with smartphone users from different age groups (18-35, 36-59, 60 and older) and usability testing of a smartphone data collection app (in the laboratory, as well as in the field). Both methods were directed at investigating barriers to participation in smartphone surveys (including concerns around data privacy, respondent burden, and app functionality), and how they might be overcome through optimising the design of the app, the questionnaires and other survey tasks administered via the app, and the data collection protocols (including incentive schemes and the contextualisation of contact attempts). We present and discuss preliminary findings and how they will inform future phases of research on data collection by smartphone in Swiss surveys.


The effects of a sequential mixed-mobile design and in-interview onboarding on participation in a mobile app study

Professor Annette Jackle (University of Essex) - Presenting Author
Dr Alexander Wenz (University of Essex)
Dr Jonathan Burton (University of Essex)
Professor Mick P. Couper (University of Michigan)
Professor Thomas Crossley (University of Essex)
Ms Carli Lessof (University of Southampton)

In a previous study using a mobile app to scan shopping receipts, we found that participation was limited by sample members not having devices that were compatible with the app, not being willing to download an app, or not feeling confident about doing such a task. Given these barriers to mobile app data collection, in the present study we experiment with different ways of increasing participation. We use data from an experimental study carried out on the UK Household Longitudinal Study Innovation Panel. All respondents to the 2018 interview (n~2,500) were invited to download an app and use it to report their purchases for one month, by entering amounts and categories of spending. Respondents who did not use the app after several reminders were followed up with the offer of a browser-based version of the spending diary, as an alternative way of participating in the study.

The spending study included an experiment whereby a random half of the sample were invited to download the app within the annual panel interview. The other half were sent an invitation letter a couple of weeks after their panel interview. In addition, the annual interviews were carried out with mixed modes: a random sub-set of the sample were issued to face-to-face interviewing, while the rest were issued to web first, with non-respondents followed up by face-to-face interviewers.

We use these data to examine the following questions: Does a sequential design, where sample members who do not use the app are offered a browser-based alternative, increase participation? Does the browser-based follow-up bring in different types of people, reducing selectiveness of participants? Does introducing the app task within an interview increase participation, compared to sending an invitation by post? Does the effect of introducing the app task within the interview vary with the mode of interview?


Collecting smartphone sensor measurements from the general population: Willingness and nonconsent bias

Dr Bella Struminskaya (Utrecht University) - Presenting Author
Dr Peter Lugtig (Utrecht University)
Professor Barry Schouten (Statistics Netherlands, Utrecht University)
Dr Vera Toepoel (Utrecht University)
Dr Marieke Haan (University of Groningen)
Mr Ralph Dolmans (Statistics Netherlands)
Ms Vivian Meertens (Statistics Netherlands)
Ms Deirdre Giesen (Statistics Netherlands)
Mr Jeldrik Bakker (Statistics Netherlands)
Dr Annemieke Luiten (Statistics Netherlands)

Collecting data using smartphone sensors can offer researchers detailed information about human behaviour, reduce respondent burden by eliminating survey questions, and improve measurement accuracy by replacing or augmenting self-reports. However, respondents have to be willing and able to collect sensor data using their smartphones. If those not willing or able differ from those who are, the results will be biased. A growing number of studies investigates willingness to collect sensor data; however, research on the mechanisms of willingness and actual participation is still scarce.
We study consent to performing actual in-browser smartphone sensor measurements and nonconsent bias. Specifically, we examine the role of (1) the wording of the consent question, (2) the assurance of confidentiality, and (3) the ability to control the data being collected. We randomly assigned smartphone and tablet users from a general population survey conducted by Statistics Netherlands (N of about 1,900) to three conditions (gain/loss framing, assurance of confidentiality, and ability to revoke measurements) and asked them to share their GPS location and take photos and videos. Respondents not willing to collect smartphone sensor data were asked for the reasons for their nonwillingness. We make use of the rich administrative data linked to the survey data to answer the following research questions: 1) what are the participation rates for different types of sensors (GPS, camera); 2) what is the optimal way to ask respondents for consent to smartphone sensor measurement; 3) what factors identified by respondents (privacy concerns, smartphone use skills, technical problems) could be addressed to improve consent rates; and 4) what is the extent of the nonconsent bias?
This presentation will advance the understanding of the mechanisms of participation in smartphone sensor measurement and quantify potential biases due to nonparticipation, in order to inform decisions on targeting specific groups or performing adjustments.


Self-selection bias in research including ecological momentary assessment and digital trace data

Dr Anne-Linda Camerini (Università della Svizzera italiana) - Presenting Author
Ms Laura Marciano (Università della Svizzera italiana)

Technological innovations in survey research, including ecological momentary assessments (EMAs) and trace data, increase ecological validity by enabling data collection in participants’ everyday environment. Furthermore, they reduce the recall and estimation biases in participants that are often associated with traditional survey designs. At the same time, EMAs and trace data are more ‘invasive’ in nature and thus may result in a selection bias, which jeopardizes the generalizability of research findings.
To identify a potential self-selection bias, we draw on data from 1374 students who took part in the 5th wave in 2018 of an ongoing longitudinal study with middle school students in Italian-speaking Switzerland. Within that cohort of 13-year-olds, we compared three groups of students: 1) students who completed only an annual paper-and-pencil survey at school (n = 1100); 2) students who completed the annual survey but who, despite parental consent, did not take part in an additional study including EMAs and passive data tracking via students’ smartphones (n = 171), and 3) students who completed the annual survey, obtained parental consent, and took part in a 45-day EMA and tracking study on smartphone use and adolescent well-being (n = 93).
One-way ANOVAs with Tukey’s post-hoc test revealed no significant differences in sample composition for gender, perceived economic well-being, daily time spent on the Internet, smartphones, and social networking sites, or for smartphone and social networking addiction in adolescents. However, significant differences emerged when comparing parents’ (addictive) smartphone use: parents who gave consent to the additional EMA and passive data tracking study spent significantly more time on an average day with their smartphone and showed significantly higher values of smartphone addiction. These and further comparison results will be discussed in light of a potential self-selection bias.


Who is Willing to Use Happiness Apps? Evidence From a Representative German Sample

Dr David Richter (German Institute for Economic Research (DIW Berlin)) - Presenting Author
Ms Julia Rohrer (University of Leipzig)

Smartphone apps that monitor happiness have been used for research on the everyday determinants of happiness and life satisfaction. However, it is unclear what kind of people voluntarily participate in such studies and how they might differ from non-participants. In 2015 and 2016, two independent sub-samples (N = 1,129 & N = 1,868) of the Innovation Sample of the Socio-Economic Panel were offered the opportunity to download a smartphone app that combined the day reconstruction method and the experience sampling method to investigate happiness. In 2015, respondents only received feedback about what made them happy or unhappy. In 2016, participants were offered an incentive of 50 Euro plus feedback. Only 7 percent of the sample participated in 2015, but this number increased five-fold when money was offered. We also explored how respondents' economic and individual-difference variables (e.g., personality, happiness) affected participation, and how these variables interacted with the incentives.