Conference Programme 2015


Wednesday 15th July, 16:00 - 17:30 Room: HT-105

Mobile and mixed-device surveys

Convenor Dr Vera Toepoel (Utrecht University)
Coordinator 1 Mrs Marika De Bruijne (CentERdata)
Coordinator 2 Mr Arnaud Wijnant (CentERdata)

Session Details

Online surveys can nowadays be completed on many devices, ranging from traditional desktop computers to tablets, smartphones, and hybrids of these. The use of mobile devices for survey completion is clearly increasing. Small displays and alternative input mechanisms pose challenges for survey research. In addition, the mix of devices poses challenges to survey methodologists, mainly in questionnaire design and measurement error, but also in sampling and nonresponse conversion.

We are seeking presentations that highlight potential opportunities and problems in mobile and mixed-device data collection, compare different approaches to deal with these problems, and/or propose solutions. For example:

- how can mixed-device designs help to reduce noncoverage and nonresponse bias?
- can survey respondents be effectively nudged towards using a particular device within a web survey?
- what possibilities and threats do mixed-device surveys open up for measurement?
- how can web survey questionnaires be designed effectively for use across mobile and mixed devices?

Paper Details

1. Questionnaire Design Experiments with PC- and Tablet-based Web Surveys in a Controlled Setting
Mr Alexander Wenz (University of Essex)

Although tablets are increasingly used for web survey completion, it remains unclear whether responses obtained in tablet-based web surveys differ from those collected via PC. A lab experiment was conducted to assess the impact of questionnaire design on data quality in tablet surveys. Using a cross-over design, all participants completed the questionnaire both on PC and on tablet. The laboratory setting made it possible to control the context of survey administration and to hold technical features constant. A subset of respondents participated in cognitive interviews after the experiment. The findings of the study will be available in April 2015.

2. Do mobile participants reduce data quality in general population surveys? A replication and extension using data from the GESIS Panel
Dr Bella Struminskaya (GESIS - Leibniz Institute for the Social Sciences)
Mr Kai Weyandt (GESIS - Leibniz Institute for the Social Sciences)
Dr Teresio Poggio (Free University of Bozen-Bolzano)

Studies that investigate the effects of completing surveys on mobile devices using data from probability-based general population surveys are scarce, and it is not clear whether their findings hold in other probability-based panels. We replicate and extend a recent study on the effects of smartphones and tablets on measurement error by Lugtig & Toepoel (2015, forthcoming), which uses data from the LISS Panel, a probability-based general population panel in the Netherlands. In addition to the indicators of measurement error and the effects of switching between devices used in the original study, we focus on respondent and situational characteristics.

3. Effects of Mobile versus PC Web on Survey Response Quality: a Crossover Experiment in a Probability Web Panel
Mr Christopher Antoun (Institute for Social Research, University of Michigan)

The study reported here uses a crossover experiment in a probability-based Web panel to compare data quality in a conventional Web survey (PC Web) with a version of the same survey that was reformatted for small screens and completed on smartphones (mobile Web). Respondents in the mobile Web survey were indeed more mobile and more engaged with the people and things around them. Despite this, mobile administration posed no serious threats: response quality, measured as conscientious responding and disclosure of sensitive information, was equivalent between mobile and PC Web.

4. App vs. Web for Surveys of Smartphone Users
Mrs Kyley Mcgeeney (Pew Research Center)
Miss Ruth Igielnik (Pew Research Center)

Can we move web surveys from a browser to an app on mobile phones? The Pew Research Center randomly assigned 2,011 members of its American Trends Panel to receive a series of short surveys either on a mobile app or on the web. The experiment used signal-contingent experience sampling to survey respondents twice a day for seven days about their smartphone use. A significantly higher proportion of panelists in the web condition than in the app condition responded to at least one of the 14 survey invitations. Other advantages and disadvantages of the two modes will be discussed.

5. The Role of Automated SMS Text Messaging in Public Opinion Research
Dr Nina Hoe (Institute for Survey Research - Temple University)
Dr Heidi Grunwald (Institute for Survey Research - Temple University)

This paper examines the use of web-based, automated SMS text messages in public opinion research, in an attempt to reach more diverse samples of citizens cost-effectively. A sample of 1,000 mobile subscribers was contacted via a "cold text" asking them to participate in a short survey on their opinion of a local park (an iPad mini raffle served as the incentive). Non-respondents received follow-up phone calls to determine their reason(s) for not responding. Findings suggest that automated SMS text messages are an effective way to measure public opinion, and cost-effective compared with other methods.