ESRA 2021 full program


Smart phones

Session Organisers: Mr Goran
Peter Lugtig
Time: Friday 9 July, 13:15 - 14:45

Developing an electronic device for use on the ESS in absence of an interviewer by people without access to the internet

Ms Joanna d'Ardenne (NatCen Social Research)
Ms Debbie Collins (NatCen Social Research) - Presenting Author
Dr May Doušak (University of Ljubljana)
Professor Rory Fitzgerald (ESS ERIC)
Mr Maurice Martens (CentERdata)
Dr Roxanne van Giesen (CentERdata)

DRAFT
The European Social Survey (ESS) is a biennial, large-scale, cross-national general social survey that has been conducted since 2002 using in-home face-to-face interviews with representative samples of the general population. Since the COVID-19 pandemic struck, face-to-face interviewing has been paused in almost all ESS countries. Whilst it is hoped that fieldwork might restart in 2021, there remains considerable uncertainty as to whether face-to-face interviewing will be possible in all countries in the short to medium term. In response, the ESS has commissioned the development of an ESS Electronic Questionnaire Device to allow respondents to complete the survey in a self-completion mode, without an interviewer present and without an internet connection. The device will be a tablet tailored so that the hour-long ESS questionnaire can be completed in digital format. It is being designed for simple touch-screen, offline completion so that those without any experience of using the internet, or even a computer, can complete the survey. Survey agency staff would leave the device with households after completing screening at the doorstep, reducing contact between interviewers and households to a minimum. The device might be used in parallel with online data collection where target respondents are able and willing to use their own device. Although the pandemic was the inspiration for this tool, it is expected to have longer-term utility as a complementary mode in a move towards web surveys in comparative research.
This paper will outline the methodological, practical and ethical challenges considered in the development of the device, the steps taken to address them, and initial evidence of the success of those steps. The authors will reflect on this evidence and consider the utility of such a device as part of a longer-term mixed-mode future for the ESS.


Collecting Screen Time Data in the 1970 British Cohort Study: a Pilot Study

Dr Erica Wong (Centre for Longitudinal Studies, UCL) - Presenting Author
Mr Matt Brown (Centre for Longitudinal Studies, UCL)

As smartphones have become nearly ubiquitous, how people use these devices, and the impact that the type and frequency of use can have on their lives, are of increasing interest. However, self-reports of smartphone use tend to be highly inaccurate. As smartphones already collect these data, could surveys leverage this functionality, and, more importantly, is this information that participants would be willing to share?

We tested the feasibility of directly collecting smartphone use data from participants in a pilot (n=116) of the Age 50 sweep of the 1970 British Cohort Study, a longitudinal birth cohort study of people born in England, Scotland and Wales in a single week of 1970. During face-to-face interviews, we asked participants to download a free app or to access their phone’s in-built screen time tracker and report to the interviewer how much time was spent on their phones in the past week, which three apps they used the most, and how much time was spent on each. Use of an in-built feature and a freely available app is significantly cheaper than developing bespoke apps, and placing the request for this information in a face-to-face interview has the potential to significantly boost participation rates over remote invitations. As far as we know, no other large-scale survey has trialled a similar approach.

Our paper describes how the project was administered and evaluates its success by examining participation rates, the quality of the data collected, and feedback from respondents. We also examine associations between the objective measures of smartphone usage and some self-reported measures of social media use. We discuss the challenges faced in developing this approach and conclude by exploring the possibilities for this kind of data collection in the future.
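
As a purely illustrative aside (this is not the authors' analysis code), the kind of comparison described above, between objectively logged screen time and an ordinal self-report measure, might be sketched as follows; the column names and values are invented for the example:

    # Illustrative sketch only: comparing phone-logged screen time with a
    # self-reported usage category. All data below are invented.
    import pandas as pd
    from scipy.stats import spearmanr

    pilot = pd.DataFrame({
        # Minutes in the past week, as read from the in-built tracker or app.
        "logged_minutes_week": [310, 1250, 620, 90, 2040, 480],
        # Hypothetical self-report category (1 = low use ... 5 = high use).
        "self_report_category": [1, 4, 3, 1, 5, 3],
    })

    # A rank-based association is a natural choice when one measure is ordinal.
    rho, p = spearmanr(pilot["logged_minutes_week"], pilot["self_report_category"])
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")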


Can a smartphone app be used to survey the general population? Comparing an app- and a browser-based web survey design

Ms Caroline Roberts (Faculty of Social and Political Science, LINES, University of Lausanne)
Ms Jessica Herzing (Faculty of Social and Political Science, ICER, University of Bern) - Presenting Author
Mr Daniel Gatica-Perez (EPFL Lausanne and Idiap)

To date, empirical studies exploring public willingness to participate in surveys using apps have revealed significant challenges around gaining cooperation. To the extent that reluctance to participate varies systematically across subgroups, there is a risk that smartphone app-based surveys will fail to accurately represent populations of interest, limiting their viability as an alternative to browser-based web surveys and the opportunity to combine respondents' answers with other data sources. Given the risk that low response rates pose to the precision and accuracy of estimates, it is paramount that the impact of nonresponse error on app-based surveys of the general population be evaluated.

In this paper, we report on the outcome of a three-wave methodological study comparing a smartphone app (using a push-to-app protocol with a browser-based alternative as follow-up) with conventional browser-based web recruitment of Swiss citizens. The study was conducted alongside the 2019 Swiss Electoral Studies (Selects) in the months preceding and immediately following the October federal elections. It was designed to address the following research questions: 1) How well does a 'push-to-app' recruitment protocol compare with a standard browser-based web survey protocol with respect to response and completion rates at recruitment, and attrition across panel waves? 2) To what extent do the different recruitment protocols (push-to-app and browser-based web) achieve representation on key socio-demographic variables? 3) Do the two starting modes attract different subgroups, and if so, does complementing an app-based design with a browser-based alternative reduce initial selection errors, rendering sample composition similar to that of a web browser-only design?

The study was conducted with a probability-based sample of 2,081 Swiss citizens drawn from a sampling frame based on population registers; half was randomly assigned at wave 1 to the app design and the other half to the web browser design. At waves 2 and 3, respondents to the browser-based survey were invited to switch to the app, though the browser alternative remained available. Using administrative data from the sampling frame, we assess the socio-demographic composition of the sample at different stages of fieldwork in the two survey designs, using multiple indicators of nonresponse error and the risk of bias. Results indicate that participation rates were lower in the app-based group than in the web browser group, but that across panel waves, attrition rates were lower for those assigned to the app group at wave 1. The push-to-app group underrepresented the youngest and oldest age groups, but differences between designs on other socio-demographic variables were relatively small at the recruitment stage. The findings suggest that, for a panel study, directly recruiting to the app may be more beneficial than switching to an app after initial recruitment via a browser-based web survey, given the lower attrition rates in the app group.
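
As a purely illustrative aside (this is not the Selects study code), the design-by-subgroup nonresponse checks described above can be sketched as follows; the counts, group labels and column names are invented assumptions:

    # Illustrative sketch only: response rates per design arm and a simple
    # representativeness indicator by age group. All counts are invented.
    import pandas as pd

    frame = pd.DataFrame({
        "design":    ["app"] * 3 + ["web"] * 3,
        "age_group": ["18-34", "35-64", "65+"] * 2,
        "sampled":   [340, 470, 230, 345, 465, 230],   # cases drawn from the frame
        "responded": [ 95, 180,  60, 140, 220,  95],   # wave-1 respondents
    })

    # Overall response rate per design arm.
    totals = frame.groupby("design")[["sampled", "responded"]].sum()
    totals["response_rate"] = (totals["responded"] / totals["sampled"]).round(2)
    print(totals)

    # Representation ratio per age group: a group's share among respondents
    # divided by its share among sampled cases (1.0 = proportional).
    for design, g in frame.groupby("design"):
        ratio = (g["responded"] / g["responded"].sum()) / (g["sampled"] / g["sampled"].sum())
        print(design, dict(zip(g["age_group"], ratio.round(2))))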


What panel surveys and smartphone-app studies can learn from each other

Mr Peter Lugtig (Utrecht University) - Presenting Author

Several studies have explored the use of smartphone apps to collect social and behavioral data. In recent years, notable successful studies have been conducted in the context of travel behavior, time use, household consumption, and 'in-the-moment' attitudinal measures on a variety of subjects.
One characteristic of smartphone studies is that they follow participants over a period of time to measure (short-term) change. This is a goal they share with panel studies, which typically re-interview the same respondents every year or every few months. Beyond this overlap in goals, smartphone-app studies and panel studies face similar problems of dropout and panel maintenance.
There are, however, many differences between panel studies and smartphone studies, which partly stem from the primary device used to collect data. Panel studies have been designed either for face-to-face completion (often re-interviewing respondents every year) or for PC/laptop completion (often re-interviewing respondents monthly or bi-monthly). Smartphone studies usually re-interview respondents daily, or even take continuous measurements, but then last perhaps a few weeks or months in total.
In this presentation I will discuss design differences between panel studies and smartphone-app studies, using examples from several existing studies. I will discuss the relative strengths and weaknesses of smartphone apps and panel studies in the context of the Total Survey Error framework, and give an overview of what each type of study can learn from the other. To conclude, I will outline some possible designs for 'hybrid' studies that use the smartphone to study change over longer periods of time, while including 'measurement bursts' that can be employed to better study life events or short-term change.