
ESRA 2019 Programme at a Glance


Mixed-Device Online Surveys: A Total Survey Error Perspective 1

Session Organisers: Dr Olga Maslovskaya (University of Southampton)
Professor Gabriele Durrant (University of Southampton)
Professor Patrick Sturgis (University of Southampton)
Time: Thursday 18th July, 09:00 - 10:30
Room: D13

We live in a digital age with widespread use of technologies in everyday life. Technologies change very rapidly and affect all aspects of life, including surveys and their designs. Online data collection is now common in many countries and researchers are adapting online surveys in response to the requirements of mobile devices, especially smartphones. Mobile devices can facilitate collection of new forms of data such as sensor data. It is important to assess different sources of error in mixed-device online surveys.

This session welcomes submissions of papers on different sources of error in online surveys in both cross-sectional and longitudinal contexts. The following topics are of interest:

• Coverage issues
• Data quality issues
• Item nonresponse
• Unit nonresponse
• Breakoff rates
• Completion times
• Response styles
• Mobile device use
• Optimisation of surveys and adaptation of question design for smartphones, and the implications for data quality
• Impact of different question designs and presentations on response quality across devices
• New types of data collection associated with mobile devices, such as sensor data, and their data quality

We encourage papers from researchers with a variety of backgrounds and across different sectors, including academia, national statistical institutes and research agencies.

This session aims to foster discussion, knowledge exchange and shared learning among researchers and methodologists around issues related to increased use of mobile devices for survey completion. The format of the session will be designed to encourage interaction and discussion between the presenters and audience.

The session is proposed by the National Centre for Research Methods (NCRM) Research Work Package 1, ‘Data Collection for Data Quality’. The work package is funded by the UK Economic and Social Research Council (ESRC) and led by a team from the University of Southampton. The project investigates, amongst other topics, mobile device use in mixed-device online surveys.

Keywords: data quality, online survey, total survey error, sensor data, sources of error

Data Quality in a Mixed-Mode, Mixed-Device General Population UK Social Survey: Evidence from Understanding Society Wave 8

Dr Olga Maslovskaya (University of Southampton) - Presenting Author
Professor Gabriele Durrant (University of Southampton)
Professor Peter Smith (University of Southampton)

We live in a digital age with a high level of technology use. Surveys have started adopting technologies, including smartphones, for data collection. There is a move towards online data collection in the UK, including an ambition to collect 75% of household responses online in the UK 2021 Census. However, more evidence is needed to demonstrate that online data collection will work in the UK and to understand how to make it work effectively. This paper uses Understanding Society Wave 8, the first large-scale mixed-mode, mixed-device survey available in the UK, in which 40% of the sample was assigned to the online mode of data collection. The survey allows comparison of data quality between the face-to-face and online modes of data collection, as well as between different devices within the online mode. This analysis is timely and will fill this gap in knowledge.

We use the main survey of Understanding Society Wave 8. Descriptive analysis is followed by linear, logistic or multinomial logistic regressions, depending on the outcome variable, to study data quality indicators associated first with different modes and then with different devices within the online part of the survey. The following data quality indicators will be assessed: break-off rates, item nonresponse, response style indicators, response latencies and consent to data linkage. Comparisons will be drawn to results from the Understanding Society Innovation Panel and to results from other countries.
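As an illustration of the kind of mode and device comparison described above, the following minimal sketch fits a logistic regression of an item-nonresponse indicator on mode, and then on device within the online subsample. The variable names and the synthetic data are hypothetical assumptions for illustration only; they are not drawn from Understanding Society.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for a respondent-level file: 'mode' and
    # 'device' are hypothetical column names, not survey variables.
    rng = np.random.default_rng(42)
    n = 500
    df = pd.DataFrame({
        "mode": rng.choice(["f2f", "web"], size=n),
        "device": rng.choice(["pc", "tablet", "smartphone"], size=n),
    })
    # 0/1 item-nonresponse indicator, made slightly more likely online.
    df["item_nonresponse"] = rng.binomial(
        1, np.where(df["mode"] == "web", 0.15, 0.10))

    # Logistic regression of item nonresponse on mode (full sample).
    mode_model = smf.logit("item_nonresponse ~ C(mode)", data=df).fit()
    print(mode_model.summary())

    # Within the online subsample, compare devices instead of modes.
    web = df[df["mode"] == "web"]
    device_model = smf.logit("item_nonresponse ~ C(device)", data=web).fit()
    print(device_model.summary())

For continuous or polytomous indicators (for example, response latencies or response styles), smf.ols or smf.mnlogit would replace smf.logit in the same pattern.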

The findings from this analysis will be instrumental in better understanding data quality issues associated with mixed-mode and mixed-device surveys more generally and, specifically, in informing best practice for the 2021 UK Census. The results can help improve survey designs and response rates, as well as reduce survey costs and effort.


Assessing the Data Quality of Mixed-Device Online Surveys Using Paradata

Mr Jeldrik Bakker (Statistics Netherlands) - Presenting Author
Professor Barry Schouten (Statistics Netherlands)

Paradata is data about the survey process. For online surveys, paradata can range from merely a respondent's start and end times to logs of every keystroke and click or tap. The more detailed the information, the more leads it offers for assessing data quality. In this study, we work towards a generic method for assessing the data quality of mixed-device online surveys using question-level paradata. We tested this method in an experiment implemented in the Health Survey of Statistics Netherlands, which used a two (automated vs. manual navigation) by two (normal-sized vs. big buttons) design, resulting in four conditions.
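To make the idea of question-level paradata concrete, the sketch below derives one simple quality indicator, response latency per question, from a timestamped event log. The log layout (respondent, question, event, timestamp) is a hypothetical example, not the Statistics Netherlands format.

    import pandas as pd

    # Hypothetical question-level paradata: one row per logged event.
    events = pd.DataFrame({
        "respondent": [1, 1, 1, 1],
        "question": ["q1", "q1", "q2", "q2"],
        "event": ["shown", "answered", "shown", "answered"],
        "timestamp": pd.to_datetime([
            "2019-07-18 09:00:00", "2019-07-18 09:00:07",
            "2019-07-18 09:00:07", "2019-07-18 09:00:25",
        ]),
    })

    # Latency = time from the first display of a question to its last
    # answer event; unusually long or short latencies can flag quality
    # problems at the question level.
    shown = events[events["event"] == "shown"].groupby(
        ["respondent", "question"])["timestamp"].min()
    answered = events[events["event"] == "answered"].groupby(
        ["respondent", "question"])["timestamp"].max()
    latency_seconds = (answered - shown).dt.total_seconds()
    print(latency_seconds)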

Former respondents of Statistics Netherlands were invited to participate in the survey. The sample was stratified according to age (16-29, 30-49, and >50) and the device used in the previous survey (smartphone or tablet). Subsequently, systematic sampling was used to obtain a nationally representative sample. Respondents were randomly assigned to the conditions but were free to choose their preferred device. Owing to this sampling strategy, we were able to conduct analyses for all devices: desktop (n=360), tablet (n=592), and smartphone (n=536).

During data collection, question-level paradata were collected, resulting in over 1 million observations. In the presentation, we will focus on different data quality indicators within the experimental conditions and analyse these for all devices used during survey participation: PC, tablet or smartphone. The effects of the difference in button size and the benefits and challenges of using automated navigation will be discussed. If time permits, we will also show the general method applied to other surveys of Statistics Netherlands.


Device Effects: Evidence from a Large-Scale Mixed-Device Online Survey of Young People in England

Ms Carli Lessof (University of Southampton) - Presenting Author
Professor Patrick Sturgis (University of Southampton)

An increasing proportion of participants who complete web surveys use smartphones or tablets. Early evidence suggests this does not affect the survey responses but is associated with lower initial response rates, higher break-off rates and longer response times. However, more case studies are required to build knowledge and guide efforts to minimise device effects.
This paper will present findings from the Wellcome Trust Science Education Tracker (SET), a mixed-device online survey of young people in school years 10 to 13 (aged 14-18) attending state-funded schools in England during 2016. SET is based on a random probability sample drawn from the 2014/15 National Pupil Database and Individualised Learner Record, providing information about the young person’s age, school year, free school meals status and attainment levels. Over 4,000 individuals responded (a 50% response rate), of whom 24.8% completed the survey using a smartphone and 11% using a tablet.
The study design and large sample provide an excellent opportunity to disentangle device and selection effects using quasi-experimental methods.
• Using propensity score matching (sketched below), we show how responses vary by device in terms of (1) levels of missing data (break-off rates, item nonresponse, consent to data linkage and follow-up); (2) response behaviours (straight-lining, primacy effects, agreement rates and responses to multi-code questions); and (3) miscellaneous factors (number of sessions to complete, survey length, evidence of social desirability bias and amount of device switching).
• Even when matched on standard variables, young people who complete surveys using smartphones and tablets may differ from those who use fixed devices. We examine whether this is evident in SET.
• Since propensity score matching may be problematic, we compare our findings with those from simpler methods of matching on demographics.
The study will contribute new evidence to the debate on device effects based on a large, digitally native population.
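A minimal sketch of one-to-one propensity score matching on synthetic data follows. The covariates, the outcome and the matching details are illustrative assumptions, not the SET analysis itself.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Synthetic respondents: 'smartphone' marks the 'treated' group.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "age": rng.integers(14, 19, size=n),
        "attainment": rng.normal(0.0, 1.0, size=n),
        "smartphone": rng.integers(0, 2, size=n),  # 1 = completed on a phone
    })
    df["item_nonresponse"] = rng.binomial(1, 0.10 + 0.05 * df["smartphone"])

    # 1. Estimate the propensity to complete the survey on a smartphone.
    X = df[["age", "attainment"]]
    df["ps"] = LogisticRegression().fit(X, df["smartphone"]).predict_proba(X)[:, 1]

    # 2. Match each smartphone respondent to the nearest fixed-device
    #    respondent on the propensity score (one-to-one, with replacement).
    treated = df[df["smartphone"] == 1]
    control = df[df["smartphone"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. Compare an outcome (here, item nonresponse) across matched groups.
    diff = (treated["item_nonresponse"].mean()
            - matched_control["item_nonresponse"].mean())
    print(f"Matched difference in item nonresponse: {diff:.3f}")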


PC versus Mobile Survey Modes: Are People's Life Evaluations Comparable?

Dr Francesco Sarracino (STATEC) - Presenting Author
Dr Cesare Fabio Antonio Riillo (STATEC)
Dr Malgorzata Mikucka (University of Mannheim)

The literature on mixed-mode surveys has long investigated whether face-to-face, telephone and online survey modes permit the collection of reliable data. Much less is known about the potential bias associated with using different devices to answer online surveys.
We compare subjective well-being measures collected over the web via PC and mobile devices to test whether the survey device affects people's evaluations of their well-being. We use unique, nationally representative data from Luxembourg containing five measures of subjective well-being collected in 2017. A multinomial logit with Coarsened Exact Matching indicates that the survey tool affects life satisfaction scores. On a scale from 1 to 5, where higher scores stand for greater satisfaction, respondents using mobile phones are more likely to choose the highest well-being category and less likely to choose the fourth category. We do not observe any statistical difference for the remaining three categories. We test the robustness of our findings using three alternative proxies of subjective well-being; for these, results indicate that survey tools do not induce any statistically significant difference in reported well-being. We discuss the potential consequences of our findings for statistical inference.
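The following minimal sketch illustrates the general technique of Coarsened Exact Matching followed by a multinomial logit, using synthetic data. The covariates, the bins and the simplified, unweighted matching step are illustrative assumptions, not the STATEC analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic respondents: 'mobile' = 1 if the survey was answered on
    # a mobile device; 'life_sat' is a 1-5 satisfaction score.
    rng = np.random.default_rng(1)
    n = 800
    df = pd.DataFrame({
        "mobile": rng.integers(0, 2, size=n),
        "age": rng.integers(18, 80, size=n),
        "income": rng.normal(3000.0, 900.0, size=n),
    })
    df["life_sat"] = rng.integers(1, 6, size=n)

    # 1. Coarsen continuous covariates into bins and form strata.
    df["age_bin"] = pd.cut(df["age"], bins=[17, 30, 45, 60, 80])
    df["inc_bin"] = pd.qcut(df["income"], q=4)

    # 2. Keep only strata containing both mobile and PC respondents
    #    (a simplified, unweighted version of CEM pruning).
    matched = df.groupby(["age_bin", "inc_bin"], observed=True).filter(
        lambda s: s["mobile"].nunique() == 2)

    # 3. Multinomial logit of the 1-5 response on the device indicator.
    model = smf.mnlogit("life_sat ~ mobile", data=matched).fit()
    print(model.summary())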