
ESRA 2019 Conference Program


Passive Smartphone Data Collection and Additional Tasks in Mobile Web Surveys: Willingness, Non-Participation, Consent, and Ethics 2

Session Organisers: Dr Bella Struminskaya (Utrecht University)
Professor Florian Keusch (University of Mannheim)
Time: Tuesday 16th July, 14:00 - 15:30
Room: D17

Smartphones allow researchers to collect data through built-in sensors such as GPS and accelerometers to study movement, and to passively collect data such as browsing history and app usage in addition to self-reports.

Passive mobile data collection broadens the scope of research questions that can be answered and can potentially decrease measurement error and reduce respondent burden. However, collecting such data raises issues of selectivity, representativeness, and ethics. Respondents have to be willing to provide access to sensor data or to perform additional tasks such as downloading apps, taking pictures, and providing access to their smartphone's data. If respondents who are willing to engage in these tasks differ from smartphone users who are not, results based on passively collected data may be biased. Moreover, little is known to date about how to ask for informed consent in passive data collection studies. In this session, we aim to bring together empirical evidence on the state-of-the-art use of sensor measurement and other additional tasks on smartphones in combination with mobile web surveys. We welcome presentations of results from (large-scale) studies with diverse sensors and tasks from multiple countries and research settings. Possible topics include:

- current practice in collecting passive/smartphone sensor data

- implementation, wording, and placement of consent questions

- willingness to allow smartphone sensor measurements

- willingness to perform additional tasks using smartphones

- nonparticipation bias in studies using smartphone sensor measurements

- ethical issues and privacy concerns when collecting passive data

Keywords: smartphone data collection, passive measurement, smartphone sensors, emerging methods

Social Desirability in Passive Mobile Data Collection

Professor Florian Keusch (University of Mannheim) - Presenting Author
Mr Georg-Christoph Haas (University of Mannheim, IAB)
Dr Ruben Bach (University of Mannheim)
Professor Frauke Kreuter (University of Mannheim, IAB, University of Maryland)

Smartphone sensors and research apps allow researchers to passively collect behaviors and other objective phenomena from users on and with their devices. This includes, among others, app usage, Internet browsing behavior, phone call and text logs, location, movements, and activity such as standing still, walking, or being in a vehicle. Compared to surveys that rely on self-reports of behavior, passive measures should suffer to a lesser degree from the psychological processes that lead to misreporting of events and behaviors. For example, forgetting, a common source of underreporting in surveys, should be eliminated entirely with passive mobile data collection because the behavior is recorded as it happens. Social desirability should also be reduced compared to self-reports because of the non-reactivity of the measurement process. However, so far little empirical evidence exists on whether participants actually turn off the phone to preempt measurement of specific non-normative behaviors (e.g., going to a casino or liquor store, browsing for erotic content online) and to increase privacy.
In this presentation, we analyze data from two research apps that passively collected app usage and Internet browsing behavior (both apps) as well as location and movements (one app) from German smartphone users over several months. One sample consists of members of a non-probability online panel; the other stems from a probability-based longitudinal study. We will present patterns of detected sensitive behavior, such as online betting and adult content consumption, to show whether the amount of sensitive behavior measured is a function of time in the study: new participants who recently downloaded the app might be more careful about what they do on their phone than participants who have been in the study for a longer period. We will also analyze instances and patterns of turning off the app to potentially hide specific behaviors.
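One way to test whether detected sensitive behavior is a function of time in the study is to regress daily counts of sensitive events on days since app installation. A minimal sketch in Python follows; it is not the authors' analysis code, and the file and variable names are hypothetical.

```python
# Illustrative sketch, not the authors' code: model daily counts of detected
# sensitive events as a function of time since app installation. The file and
# column names (participant_id, days_since_install, n_sensitive_events,
# sample) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant-day.
df = pd.read_csv("app_usage_daily.csv")

# Poisson regression of daily sensitive-event counts on time in study, with
# standard errors clustered by participant to respect repeated measures.
model = smf.poisson(
    "n_sensitive_events ~ days_since_install + C(sample)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
print(model.summary())

# A positive coefficient on days_since_install would be consistent with
# early-study self-censoring of non-normative behavior.
```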


Measurement Quality in Mobile Sensor Data

Dr Sebastian Bähr (Institute for Employment Research) - Presenting Author


Mobile devices, and especially smartphones, have developed into an innovative data source for social scientists. With the increasing prevalence of smartphones, asking questions on mobile devices is an immediate, timely, and cost-efficient way of surveying populations. However, the use of smartphones for social research is not limited to acting as a survey platform: the user's interaction with the device and its built-in sensors generates novel and useful data.
Up to now, we know little about this new type of data. Which apps are used, when, and how often? What is the size and structure of the contact address book on the smartphone? With which of these contacts does the smartphone user actually communicate? Are there regularities in daily movement? What activities, such as mode of transportation, can we infer from the built-in sensors?
This incomplete list highlights the great potential of these new data sources. Data on app usage, mobile communication networks, location, and activity are measured at high frequencies, often at millisecond resolution. Over the course of a few days, this accumulates to large amounts of data. Preparing these novel data types necessitates combining knowledge from existing domains, such as geo-computation and network analysis, with knowledge about mobile phones. Ensuring the quality of these new data sources poses new challenges.
The IAB-Smart study combines surveys on the smartphone with comprehensive passive data collection for more than 600 persons over 180 days, drawn from a probability sample of participants in a large German panel study. This unique data set is ideal for assessing the quality of passive mobile measurements. We present results on the completeness of the data and quality changes over time, as well as plausibility checks using multiple passive data sources, survey questions posed on the device itself, and the linked panel survey data.
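For completeness checks of this kind, one simple indicator is the share of hours per participant-day that contain at least one passive measurement. The sketch below illustrates the idea under assumed file and column names; it is not the IAB-Smart pipeline.

```python
# Minimal sketch of a completeness indicator for passive measurements:
# the share of hours per participant-day with at least one reading.
# File and column names are assumptions for illustration.
import pandas as pd

events = pd.read_csv("sensor_events.csv", parse_dates=["timestamp"])
events["date"] = events["timestamp"].dt.date
events["hour"] = events["timestamp"].dt.hour

# Hours with >= 1 reading, out of 24, per participant-day.
coverage = (
    events.groupby(["participant_id", "date"])["hour"]
    .nunique()
    .div(24)
    .rename("hourly_coverage")
    .reset_index()
)

# Index each participant's observed days to inspect quality changes over time.
coverage = coverage.sort_values(["participant_id", "date"])
coverage["day_in_study"] = coverage.groupby("participant_id").cumcount()
print(coverage.groupby("day_in_study")["hourly_coverage"].mean().head(30))
```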


Assessing and Addressing Missingness Mechanisms in Passively-Recorded Location Data

Ms Danielle McCool (Utrecht University) - Presenting Author
Ms Katalin Roth (Utrecht University)
Mr Barry Schouten (Statistics Netherlands)
Mr Peter Lugtig (Utrecht University)


Passive data collection has been proposed as a solution to important existing problems in traditional survey research, promising lower costs and less investment from researchers. However, the increased capacity to reach respondents, combined with reduced oversight from researchers, both introduces new forms of missing data and increases the rate of non-ignorable missingness in the resulting data.
In a joint project, Utrecht University and Statistics Netherlands developed an app to record near-continuous passive location data on both iOS and Android devices. These data are grouped into longitudinal sets of coordinates marking stops and tracks, which are then displayed to the user as a timeline. Subsequently, the user is asked to enter additional information on their travel behaviors.
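As an illustration of the kind of grouping involved, the sketch below detects "stops" in a time-ordered series of coordinates: runs of fixes that stay within a small radius for a minimum duration, with the segments in between treated as tracks. This is a generic heuristic with illustrative thresholds, not the logic of the actual app.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def detect_stops(fixes, radius_m=100, min_duration_s=300):
    """fixes: time-ordered list of (timestamp_s, lat, lon).
    Returns (start_s, end_s) intervals in which the device stayed within
    radius_m of an anchor fix for at least min_duration_s; the segments
    between stops are the candidate tracks."""
    stops, i = [], 0
    while i < len(fixes):
        j = i
        # Extend the window while fixes stay within radius_m of fix i.
        while j + 1 < len(fixes) and haversine_m(
            fixes[i][1], fixes[i][2], fixes[j + 1][1], fixes[j + 1][2]
        ) <= radius_m:
            j += 1
        if fixes[j][0] - fixes[i][0] >= min_duration_s:
            stops.append((fixes[i][0], fixes[j][0]))
            i = j + 1
        else:
            i += 1
    return stops
```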
As the app introduces many new steps into the data collection process, and because both the app and the device on which it runs are themselves potential points of failure, the resulting data must be handled differently from the survey data they are intended to replace.
In November 2018, the app was made available to 1,900 users in a field experiment set to run for a rolling period of seven days. Based on our analysis of these data, in combination with simulations of induced missingness, we aim to 1) categorize mechanisms that can produce missingness (e.g., a crashed app, temporary loss of satellite signal, a dead battery, manual app closure), 2) provide methods to distinguish between these mechanisms, and 3) suggest potential avenues for addressing the missingness, with the ultimate goal of minimizing bias in planned analyses.
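To give a sense of how such mechanisms might be distinguished in practice, the sketch below finds gaps between consecutive location fixes and attaches simple heuristic labels. The thresholds and auxiliary columns (battery level and GPS accuracy before the gap) are assumptions for illustration, not the authors' method.

```python
# Hedged sketch: flag candidate missingness mechanisms in a location trace by
# inspecting gaps between fixes. Thresholds and column names are assumptions.
import pandas as pd

fixes = pd.read_csv("location_trace.csv", parse_dates=["timestamp"])
fixes = fixes.sort_values(["participant_id", "timestamp"])

# Minutes elapsed since the previous fix for the same participant.
fixes["gap_min"] = (
    fixes.groupby("participant_id")["timestamp"].diff().dt.total_seconds() / 60
)

gaps = fixes[fixes["gap_min"] > 5].copy()  # assuming fixes every <= 5 minutes

def label_gap(row):
    if row["battery_pct_before"] < 5:
        return "dead battery"
    if row["accuracy_m_before"] > 100:
        return "satellite signal loss"
    return "unexplained (app crash or manual closure?)"

gaps["mechanism"] = gaps.apply(label_gap, axis=1)
print(gaps["mechanism"].value_counts())
```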


Systematic Bias in Adolescents’ Self-Report on Smartphone Use: Comparison of Three Different Assessment Modes to Evaluate the Quality of Survey Data

Ms Laura Marciano (Università della Svizzera italiana) - Presenting Author
Ms Teena Thomas (University of Saskatchewan)
Professor Nathaniel Osgood (University of Saskatchewan)
Dr Anne-Linda Camerini (Università della Svizzera italiana)

The rise of mobile media has sparked numerous studies on adolescents’ media use and its association with different developmental and well-being outcomes. The large majority of studies relies on cross-sectional self-report surveys, which are subject to several biases, including estimation bias, recall bias, and social desirability bias. Technological innovations in the measurement of mobile media use overcome most of these biases, either by implementing ecological momentary assessments (EMAs) or by passively recording mobile media use via a tracking application installed on the device.
In the present study, we aim to evaluate the validity of self-report survey data by comparing three different data sources on smartphone screen time (duration) and application use. To do so, we draw on data from 93 middle-school students (55% female; mean age = 13.6 years, SD = 0.52) in Ticino. Students received an annual self-administered paper-and-pencil questionnaire asking them about the duration of smartphone use and social networking during a typical weekday and weekend day. Furthermore, they received 11 EMAs on their smartphone, spread over a period of 45 consecutive days, assessing overall use and specific app use on a given day. Finally, we obtained sensor data by passively recording smartphone screen time and app use for the same 45-day period.
In our presentation, we will first discuss the advantages and disadvantages of each data source on both a conceptual and a methodological level. Second, we will statistically compare data on smartphone screen time and app use to identify potentially faulty estimations and evaluate the validity of self-reports. Finally, we will calculate a difference index between overall screen time and app use as assessed via EMAs and as traced automatically, and regress this index on gender, trait social desirability, smartphone addiction, and social network addiction to identify significant predictors of a potential self-report bias in mobile media use among adolescents.
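As a rough sketch of this final step, assuming one row per student with EMA-reported and passively logged screen time plus the candidate predictors (all variable names hypothetical), the regression could look as follows:

```python
# Hedged sketch of a difference-index regression: self-reported (EMA) minus
# passively logged screen time, regressed on candidate predictors of
# misreporting. File and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ema_vs_log.csv")  # one row per student

# Positive values indicate overreporting relative to the passive log.
df["diff_screen_time"] = df["ema_screen_min"] - df["logged_screen_min"]

model = smf.ols(
    "diff_screen_time ~ C(gender) + social_desirability"
    " + smartphone_addiction + sns_addiction",
    data=df,
).fit()
print(model.summary())
```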