All time references are in CEST
Wearables, Apps and Sensors for Data Collection 3
Session Organisers: Dr Heidi Guyer (RTI International); Professor Florian Keusch (University of Mannheim School of Social Sciences)
Time: Thursday 20 July, 16:00 - 17:30
The recent and ongoing proliferation of mobile technology allows researchers to collect objective health and behavioral data at increased intervals and in real time, and may also reduce participant burden. Wearable health devices can measure activity, heart rate, temperature, sleep behaviors and more; apps can be used to track behaviors such as spending, transportation use or health measures, as well as for ecological momentary assessment; and smartphone sensors have been used to capture sound and movement, among other signals. The COVID-19 pandemic brought about additional uses of apps and sensors to measure population trends on a wide range of topics including mobility, access, symptoms, infection, and contagion. Large national studies such as the UK Biobank study and the U.S.-based NIH All of Us research program have demonstrated the scalability of integrating wearables into population-based data collection. Other studies, smaller in scope or sample, have developed innovative approaches to integrating apps and sensors in data collection.
However, researchers using these new technologies to collect data face many decisions: which devices to use, how to distribute them, how to process the data, and so on. These decisions affect other components of the research design, including selection bias and data quality. In this session, we invite presentations demonstrating novel uses of wearables, apps, and sensors for data collection, as well as potential barriers or challenges. Presentations may relate to measurement, consent, data storage, data analysis, and data collection.
Keywords: data collection, wearables, survey apps, sensors, measurement
Mr Carlos Ochoa (UPF) - Presenting Author
As people’s lives increasingly take place on the internet, researchers are interested in investigating various online events. Surveys are the default tool for doing so, but online events are short, repetitive and hardly distinguishable, making them prone to memory errors and recall bias as time passes.
In some cases, passively collected data are a valid alternative, for instance through metered panels, i.e., opt-in online panels whose members have agreed to install tracking software on their browsing devices to share their online activities. Metered data are particularly suitable for researching online events but are affected by other types of errors, and cannot capture subjective information (e.g., motivations) or relevant objective information (e.g., offline consequences).
These limitations can be overcome by sending a survey to a sample of metered panelists at the moment an event of interest is detected in the metered data. This method has the potential to add the missing information that cannot be collected passively, while also reducing the memory errors that affect conventional surveys.
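As an illustration, the triggering mechanism described above might be sketched as follows. This is a hypothetical sketch, not the study's actual implementation: the URL keywords, event structure, and survey-dispatch hook are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MeteredEvent:
    """One passively tracked browsing event from a metered panelist."""
    panelist_id: str
    url: str
    timestamp: datetime

# Hypothetical markers for the event of interest: a job-application page.
JOB_APPLICATION_KEYWORDS = ("apply", "job-application", "solicitud")

def is_job_application(event: MeteredEvent) -> bool:
    """Return True if the tracked URL looks like a job application."""
    return any(k in event.url.lower() for k in JOB_APPLICATION_KEYWORDS)

def maybe_trigger_survey(event: MeteredEvent, invited: set) -> bool:
    """Invite the panelist to an in-the-moment survey, at most once."""
    if is_job_application(event) and event.panelist_id not in invited:
        invited.add(event.panelist_id)
        # send_survey_invitation(event.panelist_id)  # hypothetical dispatch call
        return True
    return False
```

The key design point is that the invitation fires while the event is still fresh, rather than asking respondents to recall past job searches.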
This presentation reports the results of an experiment comparing an in-the-moment survey triggered by an online job application with a conventional survey asking participants to report on their last job searches.
Fieldwork is expected in early 2023. Both samples (expected N=200) will be drawn from the same panel in Spain. The main goal is to answer three questions: 1) Do panelists agree to participate in in-the-moment surveys? 2) How do respondents evaluate that experience? 3) Do in-the-moment surveys provide better-quality or new data compared to conventional surveys?
This is the first time an in-the-moment survey has been compared to a conventional survey researching the same online event, with the aim of assessing the applicability and efficacy of in-the-moment research, as well as helping researchers decide which option best suits their needs.
Professor Barry Schouten (Statistics Netherlands & Utrecht University) - Presenting Author
Dr Anne Elevelt (Statistics Netherlands)
Dr Jonas Klingwort (Statistics Netherlands)
Dr Peter Lugtig (Utrecht University)
One of the key research questions in smart surveys is how actively respondents should be involved in checking and, if needed, correcting errors in smart measurements. Surveys go smart to reduce respondent burden and/or to improve data quality for non-central survey topics. Active involvement re-introduces burden, and it risks spurious improvements in data quality when respondents are unmotivated or unable to perform the validation task. From a respondent perspective, however, control over the data and feedback on outcomes may be expected.
This active-passive trade-off is at the core of virtually all smart surveys. In a large-scale study using a smart travel app, respondent motivation and ability to perform various tasks are evaluated. Sample units were randomly allocated to different conditions, such as the length of the reporting period and the range of possible respondent actions. Respondent actions were classified based on the complexity of the task, the centrality of the required knowledge, and the need for recall. The paper investigates whether longer reporting periods lead to fewer actions and/or a decay of actions over time. It also investigates whether the impact depends on the set of actions expected from respondents.
Dr Laurel Fish (University College London) - Presenting Author
Professor Pasco Fearon (University College London)
Dr Marialivia Bernardi (University College London)
Professor Lisa Calderwood (University College London)
Professor Alissa Goodman (University College London)
Dr Sandra Mathers (Oxford University)
Ms Sarah Knibbs (Ipsos)
Ms Kavita Deepchand (Ipsos)
Ipsos Team (Ipsos)
Smartphone apps have the potential to provide a relatively low-cost complementary solution to improving data depth and richness in large-scale studies. Children of the 2020s (COT20s) is a new longitudinal birth cohort that has successfully implemented an innovative smartphone app (BabySteps) during its first wave of data collection. COT20s was commissioned by the Department for Education and is led by University College London (UCL). Ipsos are carrying out the data collection. There are five planned waves of survey data collection (at 9 months and 2, 3, 4 and 5 years) that will monitor the early life and development of children born in England in the early 2020s. Findings from this study will provide evidence for important policy decisions to better support children and families across England in the early years.
Developed by the University of Iowa and tailored by UCL, BabySteps is both a study engagement tool and a means of data collection for COT20s. The app offers several engagement features, including a ‘Baby Diary’ to record memories of the child’s growth and development, ‘News and Articles’ to stay up to date with the study and the science of child development, and ‘Daily Trackers’ to monitor sleep and milestones. In-app data collection involves a set of short monthly research activities that participants can complete to earn monetary rewards. These activities aim to capture rich data about children’s development throughout the inter-wave periods.
The app was introduced to the 8,569 primary caregivers who enrolled during the first wave of COT20s, when their child was 9 months old. 74% of participants registered on the app, of whom 75% completed their first monthly research activity.
In the presentation we will discuss the app design, consent procedure and barriers, data quality, user profiles, ongoing engagement, and challenges.
Professor Annette Jäckle (University of Essex) - Presenting Author
Dr Jonathan Burton (University of Essex)
Professor Mick Couper (University of Michigan)
We examine protocols for inviting survey respondents to complete data collection tasks using mobile apps. The overall aim is to identify protocols that increase participation rates and reduce non-participation bias. We use data from an app study implemented in the 2022 Innovation Panel survey, a probability household panel in the United Kingdom. All respondents (n=2,593) were invited to install the BodyVolume app and use it to answer profile questions (age, sex, height, weight, activity level) and take two photos of themselves (front and side view). The app converted the photos into outlines of body shape from which it calculated body fat, visceral body fat, waist-hip ratio, and the lengths and circumferences of body parts. We examine the following research questions: (1) Does participation depend on the mode of the survey in which respondents are invited to the app study? (2) Does the type of feedback promised affect participation? (3) Does the type of incentive affect participation? (4) Do any of the experimental treatments reduce non-participation bias? (5) At which point in the process of installing and logging in to the app do we lose respondents? The study included three experiments: the survey mode (web-first vs. CAPI-first mixed modes); an additional incentive for the body measurement (£5 conditional on using the app vs. £5 added to the unconditional incentive for the survey interview); and promised feedback (body fat vs. visceral body fat vs. no feedback promised). After the invitation to the app study, we asked respondents whether they managed to install and log in to the app; if yes, how they installed it (using the link, the QR code, or a search in the app store); if not, why not; and if they tried unsuccessfully, where in the process they dropped out. Fieldwork is complete; data are not yet available.
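The three experiments above could, under the assumption that they are fully crossed (the abstract does not state the actual allocation scheme), be sketched as a simple factorial randomization. The condition labels and the `assign` helper are illustrative only.

```python
import itertools
import random

# The three experimental factors described in the abstract.
MODES = ("web-first", "CAPI-first")
INCENTIVES = ("conditional £5", "unconditional +£5")
FEEDBACK = ("body fat", "visceral body fat", "no feedback")

# Assumed fully crossed design: 2 x 2 x 3 = 12 cells.
CONDITIONS = list(itertools.product(MODES, INCENTIVES, FEEDBACK))

def assign(sample_ids, seed=42):
    """Randomly allocate sample units across the crossed design (hypothetical)."""
    rng = random.Random(seed)  # seeded for a reproducible allocation
    return {sid: rng.choice(CONDITIONS) for sid in sample_ids}
```

In practice, panel studies often use balanced or stratified allocation rather than independent draws; this sketch only shows the structure of the condition space.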
Ms Danielle Remmerswaal (Utrecht University) - Presenting Author
Dr Peter Lugtig (Utrecht University)
Dr Bella Struminskaya (Utrecht University)
Professor Barry Schouten (Centraal Bureau van de Statistiek (CBS))
For policy decisions regarding (public) transportation, data on travel movements broken down by travel mode are crucial. Travel diary studies provide this information. In official statistics, smartphone apps are being tested as a mode for survey diary research. As a mode, smartphone apps offer promising features for collecting both self-report measures and digital behavioural data.
In this study we report on a field test carried out between November 2022 and February 2023 among 1,900 individuals from a probability-based sample in the Netherlands. We used a smartphone app to collect travel diary data over a 1-day or 7-day period. The app records travel behaviour using location sensors and compiles a diary which respondents can annotate and enrich.
We answer the question: how does the length of the study influence nonresponse and data quality? To do so, we compare response rates and data quality for two groups that were randomly assigned to be invited to participate for one day (group 1) or seven days (group 2).
We compare data quality for the two study lengths by analyzing the percentages of stops and trips that are labelled in the app by respondents across conditions and over consecutive days of the study.
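The labelling-rate comparison described above can be sketched as follows. This is a minimal illustration assuming one flat record per app-detected stop or trip; the data layout and function are hypothetical, not the study's actual processing pipeline.

```python
from collections import defaultdict

def labelling_rates(records):
    """Percentage of app-detected stops/trips labelled by respondents,
    grouped by experimental condition and day in the study.

    records: iterable of (condition, day, was_labelled) tuples,
             where was_labelled is a bool.
    """
    totals = defaultdict(int)
    labelled = defaultdict(int)
    for condition, day, was_labelled in records:
        totals[(condition, day)] += 1
        labelled[(condition, day)] += was_labelled  # True counts as 1
    return {key: 100.0 * labelled[key] / totals[key] for key in totals}
```

Comparing these rates across the 1-day and 7-day conditions, and across consecutive days within the 7-day condition, would show whether respondents' annotation effort decays over longer reporting periods.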