Thursday 20th July, 14:00 - 15:30, Room: Q4 ANF2


Direction of Response Scales

Chair: Dr Ting Yan (Westat)
Coordinator: Dr Florian Keusch (University of Mannheim)

Session Details

The measurement of many constructs in social and marketing research, such as attitudes, opinions, behaviors, personality traits, and personal states, heavily relies on the use of response scales. Survey literature has demonstrated that many design features of response scales (e.g., number of scale points, numeric and verbal labels, spacing of response options, alignment) affect how survey respondents process the scale and use these features to construct their responses. A response scale could descend from the positive to the negative pole (e.g., “strongly agree” to “strongly disagree”) or the highest to the lowest point (e.g., “all of the time” to “never”). The same scale could also ascend from the negative to the positive pole (e.g., “strongly disagree” to “strongly agree”) or the lowest to the highest point (e.g., “never” to “all of the time”). An important question is then whether or not the direction of a response scale affects survey answers, holding constant the other features of the scale.

This session invites presentations that investigate the influence of scale direction on survey responses. We particularly welcome presentations that analyze the influence of scale direction (1) on data quality, (2) under different modes of data collection, especially emerging modes, such as mobile Web and SMS/texting, (3) considering moderating effects of scale- and question-level characteristics, such as number of scale points, scale alignment, and question content, and (4) in a cross-cultural context.

Paper Details

1. Response-order Effect in Radio Button Rating Questions
Mrs Natalia Maloshonok (National Research University Higher School of Economics)

Previous empirical research has found that the visualization of online survey questionnaires influences data quality (see, for instance, Arnau, Thompson, & Cook, 2001; Christian, Dillman, & Smyth, 2008). However, respondents’ answers can be affected not only by question format but also by how the alternatives are presented, especially the order and visualization of response options (see, for example, Smyth, Dillman, Christian, & Stern, 2006). Empirical studies of response-order effects have produced contradictory findings. This study aims to answer the following questions: (1) does the order of alternatives affect respondents’ answers in vertical radio-button questions? (2) does the response-order effect differ across social and demographic groups?
The experiment was carried out in the pre-course surveys for twenty-eight online courses launched by the National Research University Higher School of Economics (HSE) on Coursera.org and Openedu.ru in 2015-2016. Invitations to the surveys were sent by a mailing system. The number of responses per survey ranged from 81 to 3,574, with response rates (RR2) between 3.4% and 22.3%. The overall sample comprises 22,910 respondents.
In the experiment we manipulated the order of response options and the presence/absence of a non-substantive option (“Don’t know”) in separately presented rating questions. The question evaluating familiarity with the subject area of the course (“Prior to taking this course, how familiar were you with the subject area of neuroeconomics?”) was presented to respondents in one of four versions: (1) radio buttons with options listed from least to most with the non-substantive option; (2) radio buttons with options listed from least to most without the non-substantive option; (3) radio buttons with options listed from most to least with the non-substantive option; (4) radio buttons with options listed from most to least without the non-substantive option. Questions were measured on a five-point scale (I am completely new to this subject area; I am mostly new to this subject area; I am somewhat familiar with this subject area; I am very familiar with this subject area; I am an expert in this subject area). In versions (1) and (3) the non-substantive option was presented at the bottom.
The findings from the experiment showed a significant impact of response-option ordering on respondents’ answers to rating questions in radio-button format, with larger shares of respondents selecting the extreme substantive options when these were presented first in the list (a primacy effect). Thus, respondents chose the extreme substantive options (“I am completely new to this subject area” and “I am an expert in this subject area”) more frequently when they appeared at the top of the response-option list than at the bottom. This finding was observed for both question formats: with and without the “Don’t know” option. There is also a significant negative interaction between option order and education affecting the probability of choosing “I am completely new to this subject area”; a sketch of such a model follows below.
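An interaction of this kind can be tested with a logistic regression that includes an order-by-education term. A minimal sketch in Python with statsmodels; the file and column names (chose_new, order_most_first, higher_ed) are illustrative assumptions, not the author’s actual variables:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical respondent-level file; columns are illustrative:
    # chose_new        = 1 if "I am completely new to this subject area" was chosen
    # order_most_first = 1 if options were listed from most to least familiar
    # higher_ed        = 1 if the respondent holds a higher-education degree
    df = pd.read_csv("responses.csv")

    # Logistic regression with an order x education interaction term;
    # a significant negative interaction coefficient would match the
    # finding reported above.
    model = smf.logit("chose_new ~ order_most_first * higher_ed", data=df).fit()
    print(model.summary())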


2. Scale Direction Effects in Agree/Disagree and Item-Specific Questions: A Comparison of Question Formats
Mr Jan Karem Höhne (University of Göttingen)
Professor Dagmar Krebs (University of Gießen)

The influence of response scale direction on respondents’ response behavior is a well-known phenomenon in social science research. While there are several approaches to explaining such response order effects in survey responding, the survey literature reports mixed empirical evidence. Furthermore, different question formats seem to vary in their susceptibility to these effects. In this study, we investigate the occurrence of response order effects in Agree/Disagree (A/D) questions and Item-Specific (IS) questions and provide initial evidence for the concept of “asking manner” (Höhne, Schlosser, and Krebs, 2017). We conducted an experiment with four groups in which we varied the scale direction (decremental vs. incremental) within A/D and IS questions and additionally asked respondents to evaluate both question formats. The first group (n = 209) received A/D questions with a decremental scale direction. The second group (n = 202) received IS questions with a decremental scale direction. Groups three (n = 268) and four (n = 251) received A/D or IS questions with an incremental scale direction, respectively. The results of our study reveal substantial response order effects within the A/D but not within the IS question format. Furthermore, respondents’ evaluations suggest that IS questions require more considerate responding than A/D questions. Altogether, our findings indicate that IS questions are much more robust against response order effects than A/D questions.
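Direction effects of this kind are typically assessed by comparing the distributions of chosen scale points between the decremental and incremental groups within each question format. A minimal sketch, assuming a hypothetical long-format data set (the file and column names are illustrative, not the authors’ materials):

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical file: one row per answer, with illustrative columns
    # "format" (AD/IS), "direction" (decremental/incremental), and
    # "response" (the chosen scale point).
    df = pd.read_csv("experiment.csv")

    # Test the scale-direction effect separately within each format.
    for fmt in ["AD", "IS"]:
        sub = df[df["format"] == fmt]
        table = pd.crosstab(sub["direction"], sub["response"])
        chi2, p, dof, _ = chi2_contingency(table)
        print(f"{fmt}: chi2={chi2:.2f}, dof={dof}, p={p:.4f}")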


3. The interaction effect of interviewer characteristics and scale direction in attitudinal items
Ms Vilma Agalioti-Sgompou (Institute for Social and Economic Research, University of Essex & Centre for Longitudinal Studies, IoE University College London)

Under the Total Survey Error paradigm, a survey, as a method that combines different processes, is likely to be affected by design decisions taken at different stages. Using data from the American National Election Study (ANES 2012), both the main survey and the module that collected data from the interviewers, I examine a combination of effects that may arise from survey design (scale direction) and survey circumstances (the interviewer’s beliefs about the survey topic). The focus (response variable) is on items that measure respondents’ perceptions of each of the presidential candidates. The scale direction was randomised, with half of the sample receiving the positive end first and the other half the negative end first. In this study, interviewer characteristics (e.g. partisanship, involvement with politics) are considered an important circumstance of the survey that may affect the respondent. The predictors of the response effect are the scale direction in combination with interviewer-respondent partisanship. I find that the direction of the response scale affects responses depending on the interviewer’s background, and that the effect varies between groups. I will conclude with a discussion of the implications of these findings for the design and fielding of surveys that measure attitudes.
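One way to model such a design-by-circumstance interaction is a regression of the candidate rating on scale direction crossed with an interviewer-respondent partisanship match, with clustering by interviewer. A minimal sketch under assumed, illustrative variable names (not the ANES codebook names):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical merge of respondent answers with the interviewer module;
    # rating         = candidate perception item (treated as continuous here)
    # negative_first = 1 if the scale showed the negative pole first
    # same_party     = 1 if interviewer and respondent share partisanship
    df = pd.read_csv("anes_merged.csv")

    # Linear mixed model: scale direction crossed with partisanship match,
    # with a random intercept per interviewer to account for clustering.
    model = smf.mixedlm(
        "rating ~ negative_first * same_party",
        data=df,
        groups=df["interviewer_id"],
    ).fit()
    print(model.summary())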


4. In your opinion, in which direction should the answer categories go?
Professor Florian Keusch (University of Mannheim)
Dr Ting Yan (Westat)

A number of studies have demonstrated the influence of scale direction on responses to attitudinal scale questions in web surveys. The scale direction effect leads to higher endorsement of response options at the start of the scale (i.e., the left-hand options in horizontal scales and the upper options in vertical scales), regardless of whether the scale runs from the negative pole/lowest point to the positive pole/highest point or vice versa. While researchers have put forward different theoretical explanations for the scale direction effect (e.g., satisficing, anchoring and adjustment), little attention has been paid to how respondents think about scale direction and whether they prefer scales to run in a specific direction.
In a two-wave study in the LISS panel with about 2,700 respondents, we experimentally manipulated scale direction (low-to-high vs. high-to-low), question type (attitudinal vs. behavioral), number of response options (five vs. seven), and type of scale labeling (fully labeled vs. end labeled) on 15 items. The questionnaire was identical in the two waves, but half of the respondents in wave 2 received the items with the opposite scale direction to wave 1. After the 15 experimental items in both waves, we asked respondents whether they thought the scale should run in a specific direction and, if so, which direction. We followed up with an open-ended probe about the reason for their preference. We analyze the relationship between the scale direction presented in the experiment and respondents’ stated preference for scale direction, and we identify groups of reasons respondents gave in the open-ended probe for their preference (e.g., norms, experience, ease of response). The results of this study add to the growing literature on scale direction effects and will help to better understand the mechanisms that produce them.
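The fully crossed manipulation can be illustrated with a small random-assignment sketch; the condition labels follow the abstract, but the assignment routine itself is an illustrative assumption, not the LISS implementation:

    import itertools
    import random

    # The four fully crossed factors yield 2 x 2 x 2 x 2 = 16 cells.
    conditions = list(itertools.product(
        ["low-to-high", "high-to-low"],    # scale direction
        ["attitudinal", "behavioral"],     # question type
        [5, 7],                            # number of response options
        ["fully labeled", "end labeled"],  # scale labeling
    ))

    def assign(respondent_id: int):
        """Reproducibly assign a respondent to one of the 16 cells."""
        rng = random.Random(respondent_id)
        return rng.choice(conditions)

    print(assign(42))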


5. Scale Direction Effects for Chinese Respondents
Dr Ting Yan (Westat)
Dr Chan Zhang (Fudan University)

Survey literature has demonstrated that the direction of a response scale affects survey responses; empirical evidence indicates that respondents are more likely to choose scale points closer to the start of the scale, regardless of its direction. In other words, respondents are more likely to choose from the agree side of an agree-disagree scale when the scale starts with “strongly agree” than when it starts with “strongly disagree.” However, most of the literature on scale direction is based on Western respondents, and the role of culture in scale direction effects has not been investigated. Survey literature has shown that respondents from a collectivist culture (such as China) differ from respondents from an individualistic culture (such as the United States) in how they process survey questions (and response scales) and how they answer them. Although respondents from a collectivist culture are considered to be more sensitive to context than respondents from a Western culture, there is limited evidence suggesting that Chinese respondents are less susceptible to frequency-scale effects than respondents from Western cultures. This study empirically examines whether scale direction affects Chinese respondents’ answers and, if so, whether scale direction effects are the same for Chinese respondents as for Western respondents.