
Wednesday 17th July 2013, 14:00 - 15:30, Room: No. 12

Construction of Response Scales in Questionnaires 2

Convenor Dr Natalja Menold (GESIS)
Coordinator Mrs Kathrin Bogner (GESIS)

Session Details

Researchers are invited to submit papers dealing with the design of response scales for questions or items measuring opinions or behaviour in surveys. Papers may address design aspects of response scales such as the number of categories, the middle category, unipolar versus bipolar scales, numerical and/or verbal labels, ascending or descending order of categories, or the scale's visual design. Of particular interest are the effects of these design aspects on respondents' answers as well as on data reliability and validity. Studies may also focus on the effects of cognitive or motivational factors. Further topics of interest include the design of response scales in different survey modes, their comparability in mixed-mode surveys, and their intercultural comparability.


Paper Details

1. Are Branching Questions Always Better than Rating Scales for Measuring Policy Preferences? Effects of Survey Question Format on Respondent Satisficing and Attitude Strength

Dr Alexander Glantz (Ipsos Public Affairs)
Mr Jan Eric Blumenstiel (University of Mannheim)

The prevailing conventional wisdom in survey research holds that most citizens do not possess tightly organized and stable policy preferences. According to satisficing theory, however, low attitude strength might partly result from measurement error, since policy attitudes are commonly measured with complex rating scales that place too much of a burden on respondents' cognitive abilities. Previous studies have shown that decomposing rating scales into branching questions can reduce task difficulty for respondents and increase the over-time consistency of policy attitudes in surveys. So far, research has paid less attention to the questions of whether branching questions reduce the risk of respondent satisficing and whether the superiority of branching questions is conditional on respondent characteristics and survey mode. In the present paper we compare the effects of rating scales and branching questions on response-style behavior, inter-attitude consistency, and attitude-intention relationships. We first hypothesize that the policy attitudes of politically involved citizens are less affected by survey question format. Second, we expect that the extent of satisficing in rating scales varies with survey mode and should be highest in orally presented surveys. To test these hypotheses, we use data from two experiments, one conducted in an online survey and the other in a mixed-mode panel survey.
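For readers unfamiliar with the branching technique, the idea is to replace one multi-point rating with two simpler questions, a direction question followed by an intensity question, whose answers are recombined into a scale value. The following sketch is purely illustrative; the option labels and the mapping are assumptions, not the authors' instrument:

```typescript
// Illustrative only: decomposing a 7-point policy rating into two
// branching questions (direction, then intensity). The option labels
// and mapping below are assumptions, not taken from the paper.

type Direction = "oppose" | "neither" | "favour";
type Intensity = "lean" | "somewhat" | "strong";

// Recombine the two branching answers into a value on the original 1-7 scale.
function branchingToScale(direction: Direction, intensity?: Intensity): number {
  if (direction === "neither") return 4; // midpoint of the 7-point scale
  const offset = { lean: 1, somewhat: 2, strong: 3 }[intensity ?? "lean"];
  return direction === "favour" ? 4 + offset : 4 - offset;
}

// Example: "strongly oppose" maps back to the scale endpoint 1.
console.assert(branchingToScale("oppose", "strong") === 1);
```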


2. Measuring political placement on a left-right scale: radio buttons, sliders, midpoints and question order effects

Professor Annelies Blom (University of Mannheim)
Ms Franziska Gebhard (University of Mannheim)
Professor Thomas Gschwend (University of Mannheim)
Dr Frederik Funke (University of Mannheim; http://research.frederikfunke.net)

When measuring individuals' self-placement and the placement of political parties on a left-right scale, many surveys use an 11-point scale with an additional 'don't know' option. This scale, as well as the associated question wording, is now quite standard in aural surveys (face-to-face and telephone) in Germany. In the German Internet Panel (GIP), a longitudinal online survey on the political economy of reforms, however, data collection is visual, interactive, and self-administered. This raises the question of whether the traditional measurements need to be adjusted when they are transferred to the online mode.

In a series of experiments in the GIP, we compared the traditional 11-point scale in the form of radio buttons to two versions of a slider scale, one with and one without a mid-point indicator. In addition, we randomized the order of the questions: in one condition, individuals were first asked to place themselves on the left-right scale and then to place the German political parties on this scale; in the other condition, the order of these questions was reversed.
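As a rough illustration of such a crossed design (all condition names are mine, not the GIP's), random assignment of each respondent to a scale format and a question order could look like this:

```typescript
// Illustrative sketch of random assignment to the crossed experimental
// conditions described above; all names are assumptions, not GIP code.

const scaleFormats = ["radioButtons", "slider", "sliderWithMidpoint"] as const;
const questionOrders = ["selfPlacementFirst", "partyPlacementFirst"] as const;

type Condition = {
  scale: (typeof scaleFormats)[number];
  order: (typeof questionOrders)[number];
};

// Draw one condition uniformly at random for each respondent.
function assignCondition(): Condition {
  return {
    scale: scaleFormats[Math.floor(Math.random() * scaleFormats.length)],
    order: questionOrders[Math.floor(Math.random() * questionOrders.length)],
  };
}
```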

Our analyses shed light on optimal ways of measuring left-right placement in the online survey mode.


3. Effect of slider scales in Web surveys: Lower data quality and biased sample composition compared to measurement with visual analogue scales or radio button scales

Dr Frederik Funke (University of Mannheim, Germany, Research Center (SFB) 884)

Research Question. Two alternatives to the standard rating scales used in Web-based research, which are built from HTML radio buttons, were examined: slider scales and visual analogue scales (VAS). Because of their similar appearance (both scales consist of a horizontal line between two, e.g. verbal, anchors), sliders are often confused with VAS. In terms of handling, however, sliders are quite demanding: respondents have to (1) move the mouse to the slider, (2) click the handle, (3) hold the mouse button, (4) drag the handle to the response option, and (5) release the mouse button. In contrast, giving ratings with VAS or radio buttons is quite simple: (1) point and (2) click.
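The handling difference follows directly from the underlying input widgets. The following is a minimal sketch only; none of this code is from the study's instrument, and the styling and the 200-gradation VAS discretisation are assumptions based on the abstract:

```typescript
// Illustrative sketch of the three scale types as web form widgets;
// not taken from the study's actual instrument.

// Radio-button scale: one click on a discrete option registers a rating.
function radioScale(name: string, points: number): HTMLElement {
  const fieldset = document.createElement("fieldset");
  for (let i = 1; i <= points; i++) {
    const input = document.createElement("input");
    input.type = "radio";
    input.name = name;
    input.value = String(i);
    fieldset.append(input);
  }
  return fieldset;
}

// Slider scale: respondents must grab the handle, drag it, and release it.
function sliderScale(name: string, points: number): HTMLInputElement {
  const input = document.createElement("input");
  input.type = "range"; // rendered as a draggable handle on a track
  input.name = name;
  input.min = "1";
  input.max = String(points);
  return input;
}

// VAS: a single click anywhere on the line records a rating, discretised
// here into 200 gradations as in the continuous VAS condition.
function vasScale(onRate: (value: number) => void, gradations = 200): HTMLElement {
  const line = document.createElement("div");
  line.style.cssText = "width:400px;height:2px;background:#000;cursor:pointer";
  line.addEventListener("click", (e: MouseEvent) => {
    const rect = line.getBoundingClientRect();
    const fraction = (e.clientX - rect.left) / rect.width;
    onRate(Math.min(gradations, Math.floor(fraction * gradations) + 1));
  });
  return line;
}
```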

Study. Data were collected in a 3 (Factor 1, rating scale: slider, VAS, or radio button) × 3 (Factor 2, number of response options: 3, 5, or 7) between-subjects experiment (N = 1067). Additionally, a continuous VAS with 200 gradations was tested.

Results. Analyses provide clear evidence that slider scales may lower data quality: break-off was about three times as high, resulting in a biased sample composition. Furthermore, a large share of respondents actually used n-point sliders as (n-1)-point scales. Additionally, response times were considerably higher. No negative effects were observed with VAS, which may be used as an alternative to standard rating scales. Overall, the recommendation is to use standard (low-tech) radio buttons for discrete measurement, or to benefit from continuous VAS whenever small differences need to be detected.


4. Response Scales in CATI Surveys. Interviewers' Experience

Mr Wojciech Jablonski (University of Lodz)

Telephone interviewers are much more constrained than face-to-face interviewers as far as the use of visual aids is concerned. The lack of visual materials is usually compensated for by implementing special techniques when designing the interview script. Such techniques include, for instance, using semantic scales instead of numeric ones, unfolding questions based on semantic scales, using fewer answer categories, shortening the answer categories, and formulating pre-categorised questions.

In the presentation, we will outline the results of a study that was carried out in 2009 and 2010 among 12 major Polish commercial survey organizations. The research was based on a standardized self-administered questionnaire for CATI interviewers, as well as in-depth interviews (IDIs) with highly experienced interviewers. A total of 846 questionnaires and 32 IDIs were completed.

This presentation investigates interviewers' opinions and attitudes regarding the different response scale formats used by survey organizations. The interviewers were encouraged to elaborate on the methodological solutions implemented in CATI scripts and to describe the problems they encounter while asking different types of questions.

As we see it, interviewers' opinions are a valuable source of information about the interview process, including the design of response scales and its impact on data reliability. These perspectives should be taken into consideration when designing CATI scripts and questionnaires.