Wednesday 19th July, 11:00 - 12:30 Room: Q2 AUD3


Response Format and Response Behavior 2

Chair: Mr Jan Karem Höhne (University of Göttingen)
Coordinator 1: Dr Timo Lenzner (GESIS – Leibniz Institute for the Social Sciences)
Coordinator 2: Dr Natalja Menold (GESIS – Leibniz Institute for the Social Sciences)

Session Details

Measuring attitudes, opinions, or behaviors of respondents is a very widespread strategy in sociology, political science, economics, and psychology to investigate a variety of individual and social phenomena. A special challenge in measuring such characteristics of respondents is to employ appropriate response formats, since they can have a profound impact on the cognitive and communicative response processes and thus on measurement quality. For this reason, essential questions arise with respect to different response formats and their impact on the (cognitive) information processing of respondents.


This session welcomes contributions that are based on empirical studies as well as theoretical considerations dealing with the relationship between response format and response behavior:

With respect to empirical studies, we especially invite contributions based on experimental designs that investigate, for instance, the influence of response scale direction. Moreover, we welcome presentations using new (innovative) techniques and approaches as well as replication studies under different conditions, such as survey modes and/or device types.

With respect to theoretical considerations, we welcome presentations that discuss and reflect on the relationship between response format and response behavior from an interdisciplinary perspective. This includes contributions that deal with the merits and limits of experimental study designs, research methods, and statistical procedures.


For this session, we welcome contributions on the following research areas (among others):

- Measuring cognitive effort and response quality associated with response formats,
- Comparisons of visual presentation forms (e.g., arrangement and presentation mode),
- Differences between several types of response formats (e.g., open vs. closed),
- Future perspectives and developments (e.g., gamification strategies),
- Measurement quality (e.g., reliability and validity),
- New methods and techniques (e.g., eye tracking and paradata),
- Replications of empirical studies and findings,
- Response bias (e.g., acquiescence),
- Theoretical considerations on response format and response behavior.

Paper Details

1. Closed-ended versus open-ended behavioural frequency questions: Measuring media exposure and internet usage
Dr Salima Douhou (City, University of London)
Dr Ana Villar (City, University of London)

Previous research has shown that offering respondents closed-ended, fixed response scales when asking about behaviour frequency can influence their response behaviour and bias their answers. The empirical findings of previous studies investigating the presentation of behavioural frequency scales suggest that respondents use the category range as a frame of reference to assess their own behaviour. This is a particular problem in cross-national surveys, given that evidence has shown that scale effects and respondents’ tendency to gravitate towards the midpoint may vary across countries. For this reason, open-ended response options are usually preferred when measuring behaviour frequencies. At the same time, open-ended questions have been shown to suffer from higher item nonresponse and need to be recoded, which can result in further data loss. In particular, item nonresponse may be more likely among respondents who never engage in the target behaviour, if the “zero” category is not obvious to them or if they are less willing to report zero frequencies than they would be to choose “never” from a fixed response scale. This issue is especially problematic when measuring behaviours with low prevalence in the population.

The goal of this experiment is to evaluate the data quality obtained from open-ended versus fixed response scales for measuring behavioural frequencies, looking at frequencies of media exposure and internet use. In this study, we tested an open-ended version, in which respondents were asked to report hours and minutes, and two closed-ended versions (i.e. respondents were asked to select a response category), which differed in their category range. In contrast to previous studies, we wanted to test whether the open-ended version would estimate the ‘no time at all’ group of respondents in a similar way as when respondents are actively presented with a ‘no time at all’ response option, as in the closed-ended versions. In addition, as open-ended items tend to suffer from higher item nonresponse, we wanted to check whether this is indeed the case for the items in this experiment. For this purpose, we conducted a split-ballot experiment in a face-to-face Omnibus survey in the UK with approximately 1,000 respondents.

The results show that the choice of response scale (high or low category range) affects response behaviour, especially for the item on media exposure. Open-ended questions, in contrast, seem to be a better alternative when one considers the bias introduced by closed-ended questions, as respondents use the category range as a frame of reference. In addition, the results show that seemingly no new biases were introduced with respect to the ‘zero-frequency’ respondents.
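
A minimal illustrative sketch (not the authors’ analysis code) of how such a split-ballot comparison could be tabulated, assuming a respondent-level extract with hypothetical columns version (open / closed_low / closed_high), answered, and reported_zero; the file name and all variable names are assumptions:

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondent-level extract; file and column names are assumptions.
df = pd.read_csv("omnibus_split_ballot.csv")

# Item nonresponse and share of zero-frequency ("no time at all") reports per version.
summary = df.groupby("version").agg(
    item_nonresponse=("answered", lambda s: 1 - s.mean()),
    zero_share=("reported_zero", "mean"),
)
print(summary)

# Does the share of zero-frequency respondents differ across the three versions?
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["version"], df["reported_zero"]))
print(f"zero-frequency share by version: chi2 = {chi2:.2f}, p = {p:.3f}")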


2. Getting to the Bottom of Response Behavior when using Forced Answering in Online Surveys
Dr Jean Philippe Décieux (Université du Luxembourg)
Mrs Alexandra Mergener (Federal Institute for Vocational Education and Training)
Mr Philipp Sischka (Université du Luxembourg)
Mrs Kristina Marliese Neufang (University of Trier)

Relevance:
Recent studies have shown that the use of the forced answering (FA) option in online surveys reduces data quality. In particular, they found that forcing respondents to answer questions in order to proceed through the questionnaire leads to higher dropout rates and lower answer quality. However, no previous study has investigated the psychological mechanism behind the effects of FA on dropout and data quality. This response behavior has often been interpreted as a psychological reactance reaction. Psychological Reactance Theory (PRT) predicts that reactance arises when an individual’s freedom is threatened and cannot be directly restored; reactance describes the motivation to restore this loss of freedom. Respondents could experience FA as a loss of freedom, as they are denied the choice to leave a question unanswered. According to PRT, possible reactions in this situation might be to quit survey participation, to fake answers, or to show satisficing tendencies.

Research content:
This study explores the psychological mechanism that affects response behavior in the FA condition (compared to the non-FA condition). Our major hypothesis is that forcing respondents to answer will cause reactance, which in turn leads to higher dropout rates, lower answer quality, and satisficing behavior.

Methods and Data:
We used online survey experiments with forced and non-forced answering instructions. Reactance was measured with a four-item reactance scale. To determine answer quality, we used self-reports of faking as well as analyses of answers to open-ended questions.

Results:
Zero-order effects showed that FA increased state reactance and questionnaire dropout and reduced answer length in open-ended questions. Mediation analysis supported the hypothesis of reactance as the underlying psychological mechanism behind the negative effects of FA on data quality.

Added Value:
This is the first study to offer statistical evidence for the often-proposed reactance effect on response behavior. It provides a basis for a deeper psychological reflection on the use of the FA option.
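
A minimal illustrative sketch (not the authors’ analysis code) of a simple regression-based mediation check along the lines reported above, assuming a respondent-level file with hypothetical columns fa_condition (0 = free choice, 1 = forced), reactance, and answer_length:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; file and column names are assumptions.
df = pd.read_csv("fa_experiment.csv")

# Total effect (c): FA condition on open-ended answer length
total = smf.ols("answer_length ~ fa_condition", data=df).fit()

# Path a: FA condition on state reactance
path_a = smf.ols("reactance ~ fa_condition", data=df).fit()

# Paths b and c': reactance and FA condition jointly on answer length
direct = smf.ols("answer_length ~ fa_condition + reactance", data=df).fit()

# Indirect (mediated) effect as the product of paths a and b
indirect = path_a.params["fa_condition"] * direct.params["reactance"]
print(f"total effect:    {total.params['fa_condition']:.3f}")
print(f"direct effect:   {direct.params['fa_condition']:.3f}")
print(f"indirect effect: {indirect:.3f}")

In practice, the indirect effect would typically be bootstrapped to obtain a confidence interval, and a binary outcome such as dropout would be modelled with logistic rather than linear regression.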


3. Buttons in Web Surveys: A test of visual languages and placement
Dr Michael Stern (NORC at the University of Chicago)
Dr Ipek Bilgen (NORC at the University of Chicago)
Ms Erin Fordyce (NORC at the University of Chicago)

Research has consistently shown that even minor changes in the visual layout of survey questions can affect the way in which respondents answer. Throughout the 1990s, strides were made to develop standards governing the design of self-administered surveys, based on visual design theory and visual heuristics. With the rapid advent, acceptance, and development of web surveys, researchers have sought to understand the best way to visually design individual screens. One issue that has yielded equivocal results is the placement of the clickable buttons that allow the respondent to advance to the next screen and return to a previous screen. In addition to the placement of these buttons, there is no consensus about whether text (“next” and “back”), symbols such as arrows, or some combination is most effective. In this paper, we explore variations in button placement and form in a large, nationally representative web survey in which respondents were randomly assigned to one of six experimental treatments:

1. Conventional placement, text on buttons (next on the left, back on the right)
2. Switching the location, text on buttons (next on the right, back on the left)
3. Using arrows instead of text on the buttons (next on the left, back on the right)
4. Using arrows instead of text on the buttons (next on the right, back on the left)
5. Using both text and arrows on the buttons (next on the left, back on the right)
6. Using both text and arrows on the buttons (next on the right, back on the left)

We examine how well each of these variations performs in terms of response latency, break-offs, mistakes in usage as recorded in the paradata strings, and differences across computers, tablets, and smartphones. The results from this work help to advance our knowledge of best practices for the use of buttons in the visual design of web surveys.
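
A minimal illustrative sketch (not the authors’ code) of how such paradata could be summarized by treatment and device type; the file name and the columns treatment, device_type, latency_ms, broke_off, and wrong_button_clicks are assumptions:

import pandas as pd

# Hypothetical respondent-level paradata extract; names are assumptions.
df = pd.read_csv("button_experiment_paradata.csv")

# Median latency, break-off rate, and navigation-error rate per treatment and device.
summary = (
    df.groupby(["treatment", "device_type"])
      .agg(median_latency_ms=("latency_ms", "median"),
           breakoff_rate=("broke_off", "mean"),
           navigation_error_rate=("wrong_button_clicks", "mean"))
)
print(summary)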


4. Improving data quality in telephone interviews by providing a response scale sheet
Dr Christoph Homuth (Leibniz Institute for Educational Trajectories (LIfBi))

Although web-based data collection is gaining popularity, computer-assisted telephone interviews (CATI) are still the standard method of data collection in the social sciences.
Data quality is crucial for every study, especially for studies that target latent variables or other variables of interest that can only be measured indirectly, often by lengthy item blocks. CATIs in particular can be cognitively burdensome for participants, especially if they contain long questions and complex rating scales, as respondents must remember all answer options. The usual CATI strategy is to have the interviewer repeat the item scales several times or after every item.

The aim of this contribution is to analyse the effect of a paper scale sheet on data quality, with a special focus on groups that are typically harder to reach, such as migrants (refugees or working migrants) or persons from lower social strata. Quality is defined broadly here and comprises several facets and dimensions, such as interview duration, interviewees’ motivation as rated by interviewers, likelihood of early interview termination, panel attrition, scale reliabilities, and item nonresponse.

In our two-cohort panel study on the educational processes of preschool and school children in Bavaria and Hesse, Germany, a CATI was conducted with the students’ parents every year for eight years. For the second cohort, interviews were also conducted with the students themselves, who were aged around sixteen. Every participant received an invitation letter for the upcoming CATI. Each letter included a paper scale sheet with all rating scales used in the interview, intended to help participants answer the questions correctly by lowering the cognitive burden of remembering each rating scale, to shorten the interview time, and to improve interview quality in general. Interviewers would then read the question and refer to the scale number as well, e.g. “How much do you agree with the following statements? Please see scale No. 6 on the scale sheet.” As the invitation letters were sent several days or weeks in advance, not all participants had them at hand when the interviewers called. We exploit this circumstance to examine the effect of the scale sheet on interview quality, as we can compare participants who had the scale sheet during their interview with those who did not.

First results show a statistically significant reduction in interview duration when interviewees had the scale sheet during the CATI. This effect is stronger for persons who were rated by the interviewers as having lower German language skills and for persons with an immigrant background. The effect is robust to controlling for social background variables (which themselves correlate with the propensity of having the scale sheet at hand for the interview). Additionally, persons who had the scale sheet were perceived as more focused on the interview and more willing to answer questions. Furthermore, persons who had a scale sheet for the last interview were more likely to be interviewed again.
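
A minimal illustrative sketch (not the study’s code) of the kind of duration model described above, assuming a respondent-level file with the hypothetical variables had_scale_sheet, migrant_background, german_skill_rating, parental_education, and duration_minutes:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; file name and variable names are assumptions.
df = pd.read_csv("cati_panel.csv")

# Interview duration regressed on scale-sheet availability, with an
# interaction to probe whether the reduction is larger for respondents
# with an immigrant background, plus social-background controls.
model = smf.ols(
    "duration_minutes ~ had_scale_sheet * migrant_background"
    " + german_skill_rating + parental_education",
    data=df,
).fit()
print(model.summary())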