Conference Programme 2015
Tuesday 14th July, 11:00 - 12:30 Room: HT-104
Design of Response Scales in (Mobile) Web Surveys
Convenor: Mrs Franziska Gebhard (University of Mannheim)
Coordinator 1: Professor Vera Toepoel (Utrecht University)
Session Details
This session focuses on the latest methodological research on rating scales for desktop and mobile surveys. The two main topics are the implementation of graphical rating scales in online surveys, and the design of answer formats for online, mixed-device, or mobile surveys.
Web surveys offer a wide range of possibilities for designing answer scales that are unique to this mode of data collection. Graphical elements such as slider scales are sometimes used as replacements for more conventional HTML elements (e.g., radio buttons or checkboxes). Yet the question remains whether the quality of these data is the same; there may also be a difference between measurement effects and respondent preferences.
Furthermore, the use of mobile phones and tablets is increasing. Researchers thus face new challenges when designing response scales for small-screen devices or devices operated by touch screen. Liquid or responsive questionnaire designs (e.g., grid questions on desktop computers versus one item per page on smartphones) as well as new HTML5 input types attempt to address these problems. However, this leads to different question contexts, which could affect question understanding and data quality. For example, rating scales optimized for touch-screen devices (e.g., HTML5 slider scales or date/time pickers) could lead to different ratings on desktop computers.
We encourage the submission of papers with a focus on
– the impact of graphical rating scales on data quality
– the implementation of response scales in mixed-device or mobile Web surveys
– new HTML5 input types (e.g., date/time pickers, range, or autocomplete)
Paper Details
1. Mobile devices in a web panel: what are the results of adjusting questionnaires for smartphones and tablets?
Mr Arnaud Wijnant (Scientific Programmer)
Mrs Marika De Bruijne (Researcher)
The world of web panels changed drastically with the introduction of mobile devices capable of accessing them. This presentation reports on an experiment examining whether an automated mobile questionnaire interface optimization (which uses a classical web interface for PC respondents and a mobile interface for mobile respondents) affects data quality and survey satisfaction in a panel. Do these effects occur only for one or more types of questions, or do they hold for the complete questionnaire? And when such changes occur, do they affect PC users, mobile users, or a combination of both?
2. The effect of response formats on data quality and comparability across online PC and smartphone surveys.
Miss Valerija Kolbas (University of Essex)
Mr Andrew Cleary (Ipsos-MORI)
Professor Nick Allum (University of Essex)
Research has shown that adjusting the online survey design to a particular device can improve response quality. However, the differences in data obtained this way are not yet well established. The present study compared PC and smartphone survey data collected with several alternative response formats. The data demonstrated the visibility principle in the mobile radio-button format, a response-order effect in the PC radio-button condition, and non-differentiation in drop-boxes with a positive or negative initial option. A drop-box with a neutral initial response was suggested as the optimal smartphone format, yielding results comparable to a PC survey. These results demonstrate the effect of tailored survey design on data.
3. Moving beyond the discrete response in subjective scale survey questions
Professor Chris Barrington-Leigh (McGill University)
Likert-like discrete numeric scales are used extensively for assessing subjective responses in survey questions, often with verbal cues provided only for the highest and lowest response options. In the case of life satisfaction questions, among others, this should become obsolete: survey methods are often fully computerized, and statistical analyses are often able to treat numeric responses as though they had a cardinal meaning.
I present two cases in which the 11-point life satisfaction scale is replaced with one admitting more continuous values, and I evaluate the benefits of the higher resolution for typical analyses performed on SWL data.
4. A new preference elicitation method, "trio-wise", and its comparison to best-worst scaling
Dr Seda Erdem (University of Stirling)
Dr Danny Campbell (University of Stirling)
We propose a new preference elicitation technique, called trio-wise, as an alternative to the best-worst scaling technique. This technique involves respondents selecting a point within a triangle that best describes their ranking of three presented items. The Euclidean distances between the selected point and the three corners provide not only the rankings but also the strength of the rank preferences. Results suggest that both methods retrieve consistent preferences. However, the capability of trio-wise to provide additional information on rank preferences, together with the feedback from respondents, leads us to prefer it over the standard best-worst scaling technique.
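The geometric idea behind trio-wise can be sketched in a few lines of code. This is a hypothetical illustration only, not the authors' implementation: it assumes the three items sit at the corners of an equilateral triangle, that a closer corner means a stronger preference, and the item labels and coordinates are invented for the example.

```python
import math

def euclidean(p, q):
    """Straight-line distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def trio_wise_ranking(point, corners):
    """Order items from most to least preferred, pairing each item
    with its distance from the respondent's selected point
    (smaller distance = stronger preference)."""
    distances = {item: euclidean(point, xy) for item, xy in corners.items()}
    return sorted(distances.items(), key=lambda kv: kv[1])

# Corners of a unit equilateral triangle, one per item (labels invented).
corners = {
    "item A": (0.0, 0.0),
    "item B": (1.0, 0.0),
    "item C": (0.5, math.sqrt(3) / 2),
}

# A respondent's selected point inside the triangle.
ranking = trio_wise_ranking((0.3, 0.2), corners)
```

The distances themselves carry the extra information that best-worst scaling lacks: two points yielding the same ordering can still express very different preference strengths.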
5. Higher Item Nonresponse Rates Caused by Slider Scales in Web Surveys
Dr Frederik Funke ((1) datamethods.net (2) LINK Institut)
Professor Vera Toepoel (Utrecht University)
Graphical rating scales can be implemented in different ways. On slider scales, ratings are given by sliding only or by a combination of sliding and clicking; Visual Analogue Scales (VAS) are operated by clicking only. In this study, two implementations of sliders are compared to VAS and HTML radio buttons with 5, 7, or 11 response options. Respondents (N = 4180) used computers, smartphones, or tablets. Item nonresponse was highest with sliders (7.3% and 7.7%), followed by VAS (5.5%) and radio buttons (3.8%). Respondents with low formal education in particular produced missing data with slider scales.