Design of Response Scales in (Mobile) Web Surveys
Convenor: Mrs Franziska Gebhard (University of Mannheim)
Coordinator 1: Professor Vera Toepoel (Utrecht University)
The world of web panels changed drastically with the introduction of mobile devices capable of accessing them. This presentation describes an experiment testing whether automated mobile questionnaire interface optimization (which serves a classical web interface to PC respondents and a mobile interface to mobile respondents) affects data quality and survey satisfaction in a panel. Do these effects appear only for one or more types of questions, or do they hold for the complete questionnaire? And when such changes occur, do they affect PC users, mobile users, or both?
Research has shown that adjusting an online survey's design to a particular device can improve response quality. However, the differences in the data obtained this way are not yet well established. The present study compared PC and smartphone survey data collected across several alternative response formats. The data demonstrated the visibility principle in the mobile radio-button format, a response-order effect in the PC radio-button condition, and non-differentiation in drop-boxes with a positive or negative initial option. A drop-box with a neutral initial response is suggested as the optimal smartphone format, yielding results comparable to a PC survey. These results demonstrate the effect of tailored survey design on data.
Likert-like discrete numeric scales are used extensively for assessing subjective responses in survey questions, often with verbal cues provided only for the highest and lowest response options. In the case of life satisfaction questions, among others, this discreteness should become obsolete: survey methods are now often fully computerized, and statistical analyses are often able to treat numeric responses as though they had a cardinal meaning.
I present two cases in which the 11-point life satisfaction scale is replaced with one admitting more continuous values, and I evaluate the benefits of the higher resolution for typical analyses performed on satisfaction-with-life (SWL) data.
We propose a new preference elicitation technique, called trio-wise, as an alternative to the best-worst scaling technique. The technique asks respondents to select a point within a triangle that best describes their ranking of three presented items. The Euclidean distances between the selected point and the three corners provide not only the rankings but also the strength of the rank preferences. Results suggest that both methods retrieve consistent preferences. However, the ability of trio-wise to provide additional information on the strength of rank preferences, together with respondent feedback, leads us to prefer it over standard best-worst scaling.
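The distance-based scoring the abstract describes can be sketched in a few lines. The corner coordinates and the scoring rule below (closest corner = most preferred item, with raw distances as a strength measure) are illustrative assumptions, not the authors' actual implementation:

```python
import math

# Hypothetical vertex coordinates for an equilateral triangle whose
# three corners represent the items being compared (assumed layout).
CORNERS = {
    "A": (0.0, 0.0),
    "B": (1.0, 0.0),
    "C": (0.5, math.sqrt(3) / 2),
}


def trio_wise_ranking(point):
    """Rank the three items by the Euclidean distance from the
    respondent's selected point to each corner: the closest corner is
    taken as the most preferred item, and the distances themselves
    serve as a (hypothetical) measure of preference strength."""
    dists = {item: math.dist(point, corner) for item, corner in CORNERS.items()}
    ranking = sorted(dists, key=dists.get)
    return ranking, dists


# A point near corner A but slightly toward B yields the ranking A > B > C.
ranking, dists = trio_wise_ranking((0.2, 0.05))
```

A point placed at the triangle's centroid would be equidistant from all three corners, expressing indifference; points near an edge midpoint express a tie between two items.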
Graphical rating scales can be implemented in different ways: on slider scales, ratings are given by sliding only or by a combination of sliding and clicking, whereas Visual Analogue Scales (VAS) are operated by clicking only. In this study, two implementations of sliders are compared with VAS and with HTML radio buttons offering 5, 7, or 11 response options. Respondents (N = 4,180) used computers, smartphones, or tablets. Item nonresponse was highest with sliders (7.3% and 7.7%), followed by VAS (5.5%) and radio buttons (3.8%). Respondents with a low level of formal education in particular produced missing data on slider scales.