
ESRA 2021 full program



Response scales and directions

Session Organiser: Ms Almuth Lietz (German Center for Integration and Migration Research (DeZIM))
Time: Friday 2 July, 15:00 - 16:30

The design of the questionnaire and its response scales can have a decisive impact on the responses obtained in a survey. Survey designers can vary, for example, the scale direction, the survey mode, the question order, the visual design, the response scale format, or the number of response options within a response scale. Against this background, this session focuses on response scales and scale directions and how they affect survey responses and data quality. Most of the studies presented build on an experimental design to answer the following questions:
• Do scale direction effects differ between different survey modes?
• How do effects of question order interact with visual design?
• What impact do different response scale formats have on data quality?
• Can data collection apps reduce response burden and increase data quality?
• How do scale direction effects differ between five- and seven-point rating scales?

Keywords: scale direction, question order, visual layout, response scale formats, data collection apps, number of response options

Data Quality of Different Response Scale Formats

Ms Alexandra Asimov (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author

There are different ways to present response scales in battery questions in self-administered questionnaires. The response scale format can affect survey responses and thus the quality of the data. Comparative information about the data quality of the different response scale formats is therefore important for design choices. In mail questionnaires, the grid and answer box formats are suitable due to the limited space. In the latter, respondents write a code letter, which represents a response option, into the corresponding box. In web questionnaires, mainly the item-by-item and grid formats are used. The auto-advance format is also increasingly used; here, the next item automatically appears on the display after the current item has been answered. While the grid and item-by-item formats have been well discussed and compared in the literature, little is known about the answer box and auto-advance formats, especially in comparison to the other scale formats. This paper compares the data quality of the answer box and auto-advance response scale formats in their respective modes to the grid format, as well as to the item-by-item format in the web mode, using data from a general population mixed-mode study (ALLBUS methods study, N = 1,428). To compare data quality, item nonresponse, nondifferentiation, straightlining and, for the mail mode, susceptibility to error will be examined. First results show that, compared to the grid format, the answer box format produces less straightlining and item nonresponse. The auto-advance format, compared to the grid and item-by-item formats, shows less straightlining and nondifferentiation but more item nonresponse. The implications of this research on response scale formats will be discussed.
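
To make the indicators concrete: the three response quality measures named in the abstract can be computed from a respondents-by-items matrix. The following minimal Python sketch is illustrative only (the operationalizations are not taken from the ALLBUS study itself); it treats NaN as item nonresponse, identical answers across the battery as straightlining, and the per-respondent standard deviation as a simple nondifferentiation score:

    import numpy as np
    import pandas as pd

    # Hypothetical battery of five items rated 1-5; NaN marks a skipped item.
    rng = np.random.default_rng(0)
    data = pd.DataFrame(rng.integers(1, 6, size=(100, 5)).astype(float),
                        columns=[f"item_{i}" for i in range(1, 6)])
    data.iloc[::17, 2] = np.nan  # inject some missing answers

    # Item nonresponse: share of unanswered items per respondent.
    item_nonresponse = data.isna().mean(axis=1)

    # Straightlining: the same answer on every (answered) item in the battery.
    straightlining = data.nunique(axis=1) == 1

    # Nondifferentiation: little variation across items (lower sd = less
    # differentiation); straightlining is the extreme case (sd = 0).
    nondifferentiation = data.std(axis=1)

    print(pd.DataFrame({"item_nonresponse": item_nonresponse,
                        "straightlining": straightlining,
                        "sd_across_items": nondifferentiation}).head())

Scores like these can then be compared across the format groups (grid, answer box, item-by-item, auto-advance) to assess which format yields better data quality.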


Designing apps for diary studies. Lessons learned from respondent feedback.

Ms Deirdre Giesen (Statistics Netherlands) - Presenting Author
Mr Stefan Theunissen (Statistics Netherlands)
Mr Barry Schouten (Statistics Netherlands)

Data collection apps seem very promising for diary studies. Quick access to a diary form in an app is easier for many respondents than access via a web or paper form. This easy access may encourage frequent entries during the day, which increases data quality. Additionally, apps can relatively easily facilitate a whole range of sensor measurements (e.g. location, motion measurements, taking pictures) that may reduce response burden and increase data quality.
Statistics Netherlands (in cooperation with various ESS partners and the Dutch government) is developing apps for three diary studies: a travel app, a household budget survey app and a time use app. The travel app combines traditional survey questions with location measurements. The household budget survey app combines survey questions with data collected through pictures of receipts. The time use app combines survey questions with short “pop-up” questionnaires that aim at in-the-moment measurements (e.g. of feelings of stress or recent media use). Some versions of the apps also offer respondents insights based on the data they provided (e.g. the amount of money spent on food or the amount of time spent on household tasks).
As part of the app development process, several small-scale pre-tests have been conducted that combined cognitive testing and usability testing. In these tests we gained insights into the design of the communication introducing the app (e.g. materials guiding respondents to and through installing the app) and of the app itself. For the travel app, a field test (n = 1,902) was conducted and an evaluation survey was sent to sample units who had downloaded the app and registered; 351 respondents completed this evaluation survey.
In this paper we will summarize the main lessons learned from our qualitative testing about the design of these types of data collection apps. Additionally, we will give an overview of how respondents evaluated the travel app in the evaluation survey and to what extent their evaluations are related to background characteristics and response behavior.


Measuring Achievement Motivation: Investigating Direction Effects Across Rating Scales with Five and Seven Points in a Probability-based Online Panel

Dr Jan Karem Höhne (University of Duisburg-Essen) - Presenting Author
Professor Dagmar Krebs (University of Giessen)

In social science research, survey questions with rating scales are a commonly used method for measuring respondents’ attitudes and opinions. Compared to other rating scale characteristics, scale direction (i.e., decremental or incremental) and its effects on response behavior have rarely been addressed in previous research. In addition, it remains unclear whether and to what extent scale direction effects are associated with the length of rating scales. In order to fill this knowledge gap in the survey literature, we investigate the size of scale direction effects across rating scales with five and seven points by analyzing observed and latent response distributions.
For this purpose, we conducted a survey experiment in the probability-based German Internet Panel (N = 4,676) in July 2019 and randomly assigned respondents to one out of four experimental groups. These four groups are defined by scale direction (i.e., decremental or incremental) and scale length (i.e., five or seven points). All groups received the same five survey questions on achievement motivation with vertically aligned scales. We presented one question per page (single presentation).
The initial results reveal substantial differences between rating scales with five and seven points. Five-point scales seem to be relatively robust against scale direction effects, whereas seven-point scales seem to be relatively prone to scale direction effects. These findings are supported by both the observed and latent response distributions.
Rating scales with different directions and lengths vary with respect to their measurement properties. Thus, decisions about scale direction and length should be made carefully.
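
As an illustration of the experimental logic (a hypothetical sketch, not the German Internet Panel's actual assignment code), the crossing of scale direction and scale length into four groups and the random assignment of respondents can be expressed in a few lines of Python:

    import random
    from itertools import product

    # 2x2 factorial: scale direction crossed with scale length.
    CONDITIONS = list(product(["decremental", "incremental"], [5, 7]))

    def assign(respondent_ids, seed=2019):
        """Randomly assign each respondent to one of the four groups."""
        rng = random.Random(seed)
        return {rid: rng.choice(CONDITIONS) for rid in respondent_ids}

    for rid, (direction, points) in assign(range(8)).items():
        print(rid, direction, f"{points}-point scale")

Each respondent then answers the same five achievement motivation questions, with only the direction and length of the rating scale varying between groups.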


Are scale direction effects associated with survey mode? Comparison of a face-to-face, a telephone and an online survey experiment

Mr Adam Stefkovics (assistant lecturer) - Presenting Author

A number of previous studies have shown that the order of response options may affect the distribution of responses. There is also considerable evidence that the cognitive process of answering a survey question differs by survey mode, which suggests that response option order effects may interact with mode effects. The aim of this study was to explore differences in scale direction effects between experimental data collected by face-to-face, telephone and online interviews. Three different scales were used in the survey. Few signs of scale direction effects were found in the interviewer-administered surveys, while in the online survey, in the case of the 0–10 scale, responses were largely affected by the direction of the scale. The anchoring-and-adjustment heuristic may explain these mode differences, and the results suggest that this theory provides a better theoretical ground than satisficing theory in the case of scalar questions.


Capturing the interaction between question order effects and visual layout: results from an online experiment

Mr Adam Stefkovics (assistant lecturer) - Presenting Author
Mr Zoltán Kmetty (assistant professor)
Mrs Júlia Koltai (assistant professor)

Question order effect refers to the phenomenon that previous questions may affect the cognitive response process and respondents’ answers: previous questions generate a context or frame in which subsequent questions are interpreted. At the same time, in self-administered surveys, respondents are required to visually process the questions, so visual design may also shift responses. Past empirical research has yielded considerable evidence for the influence of question order, but few studies have investigated how question order effects interact with visual design.
To address this research gap, the present study uses data from an online survey experiment conducted on a non-probability-based online panel in Hungary in 2019. We used the welfare-related questions of the 8th wave of the ESS. Respondents were asked about the perceived consequences of social benefits and services (E10, dependent variable). We manipulated the questionnaire by changing the position of a question that calls forth negative stereotypes about such social benefits and services (E13). One group received E13 in its original place (after E10, control group), one group received E13 just before E10, and another group received E13 before E10 but with one question (E9) between the two. We further manipulated the visual design by presenting the questions on one page (grid) or on separate pages. This resulted in a 3x2 design; 1,100 respondents were randomly assigned to one of the six groups.
We hypothesized that placing E13 right before E10 would shift responses and that the effect would be stronger if the questions were presented on the same page. The results show that placing E13 right before E10 significantly changed respondents’ attitudes in a negative direction, but the effect is significant only when the questions are presented on separate pages. A possible interpretation is that such a one-question-per-page design leads to deeper cognitive processing of the questions.
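
The key analytical question, whether the question order effect depends on the visual layout, amounts to testing a position x layout interaction. A minimal Python sketch with simulated data (assuming statsmodels; the variable names and placeholder outcome are illustrative, not taken from the study) might look like this:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the 3x2 experiment: E13 position (3 levels)
    # crossed with visual layout (grid vs. separate pages), n = 1,100.
    rng = np.random.default_rng(42)
    n = 1100
    df = pd.DataFrame({
        "position": rng.choice(["control", "before", "before_buffer"], n),
        "layout": rng.choice(["grid", "separate"], n),
        "attitude": rng.normal(0.0, 1.0, n),  # placeholder outcome (E10)
    })

    # OLS with an interaction term: does the effect of E13's position on
    # the E10 attitude score differ between the two layouts?
    model = smf.ols("attitude ~ C(position) * C(layout)", data=df).fit()
    print(model.summary())

A significant interaction coefficient would indicate that the order effect is layout-dependent, which is the pattern the abstract reports (an order effect on separate pages but not in the grid).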