ESRA 2023 Program

All time references are in CEST

Restarting the debate on unipolar vs. bipolar rating scales 1

Session Organisers: Dr Mario Callegaro (Google Cloud), Dr Yongwei Yang (Google)
Time: Tuesday 18 July, 11:00 - 12:30
Room: U6-01b

With few exceptions (e.g., Höhne, Krebs, & Kühnel, 2022), scale polarity has received little attention in recent survey research on rating scales. Scale polarity is a decision to make when writing a questionnaire that “is theoretical, empirical, and practical” (Schaeffer & Dykema, 2020, p. 40). A second decision is how many scale points a unipolar or bipolar scale should have. Further decisions concern labeling (fully labeled vs. endpoint labeled) and whether numbers are displayed alongside the scale points.
In this session we want to restart the debate on scale polarity and its effect on data quality (DeCastellarnau, 2018).

More specifically, we are looking for contributions on this topic, such as:

Didactic or empirical studies aimed at clarifying the polarity of key constructs
Empirical studies on the data quality and/or practical utility of using bipolar vs. unipolar question and scale design
Impact of question design (e.g., balanced wording) and answer scale design choices (number of scale points, choice of labeling, scale orientation, etc.)
Understanding the “why” (e.g., through asking or observing respondents)
Cultural/language generalizability or moderators/mediators
Studies using samples other than opt-in online panels
Mode effects, if any, on visual vs. auditory presentation of the scales
Validity and reliability of the two scale formats
Systematic reviews
Meta-analytic studies

DeCastellarnau, A. (2018). A classification of response scale characteristics that affect data quality: A literature review. Quality & Quantity, 52, 1523–1559.

Höhne, J. K., Krebs, D., & Kühnel, S.-M. (2022). Measuring income (in)equality: Comparing survey questions with unipolar and bipolar scales in a probability-based online panel. Social Science Computer Review, 40, 108–123.

Schaeffer, N. C., & Dykema, J. (2020). Advances in the science of asking questions. Annual Review of Sociology, 46.

Keywords: rating scales, unipolar, bipolar

Papers

Bipolar and unipolar response scales in translated surveys: the importance of source analysis for multilingual surveys

Mr Musab Hayatli (cApStAn) - Presenting Author
Dr Dorothée Behr (GESIS - Leibniz Institute for the Social Sciences)

The debate on scale polarity and its impact on data quality is far from settled. Moreover, there does not seem to be clear agreement as to what constitutes a unipolar or a bipolar scale. While the debate continues, as language specialists we feel it would be incomplete without reference to its language aspect, both in English, the language in which most of the surveys we encounter are written, and in the multitude of other languages into which these surveys are translated. Item writers carefully select their response options to reflect the type of data they would like to collect and to ensure the quality of the data and of subsequent research. Yet the translated equivalents of terms carefully selected in English, such as ‘satisfied vs. dissatisfied’, ‘helpful vs. unhelpful’, or ‘somewhat vs. to a certain extent’, may not carry the same value or correspond neatly to the intended purpose. ‘Dissatisfied’, for example, must be translated as ‘not satisfied’ in some languages.

In this presentation we will share examples of response options commonly used in survey scales and analyze their semantic value in the source language. We will then discuss these values vis-à-vis their commonly used equivalents in several selected languages. We hope this presentation will add another dimension to the debate at hand and help illustrate the impact of translated surveys on outcomes and, potentially, on survey design.


Agree with Who?: Acquiescence Bias in Agreement Question Types is More Likely a Polarity Effect

Ms Megan Hendrich (Ipsos US Public Affairs) - Presenting Author
Professor Randall Thomas (Ipsos US Public Affairs)

Acquiescence bias describes the tendency for survey respondents to choose an ‘agreeable’ response rather than a response that more accurately reflects their views. Acquiescence bias is believed to be more prominent with agreement question types and less likely with item-specific question types (e.g., Krosnick & Presser, 2010). However, one issue confounding tests comparing agreement and item-specific question types is response polarity (generally bipolar for agreement vs. unipolar for item-specific; see Dykema et al., 2021). Our previous studies have found no meaningful differences in response distributions and criterion-related validity for agreement and item-specific question types when using the same polarity. However, those studies used short, simple items, while agreement scales are often used to assess attitudes toward complex ideas requiring more cognitive processing to respond. The current study employed a 2x2 factorial design, comparing unipolar and bipolar response formats and agreement and item-specific question types using questions about political and social attitudes. We had 4,200 respondents from a probability-based panel complete an online survey. Respondents were randomly assigned to one of the four conditions, and response option order was randomized within conditions (i.e., least to most, most to least). Respondents also completed a series of criterion-related measures believed to be correlated with the attitudes of interest. When using the same polarity, response distributions for agreement and item-specific question types were similar. Both agreement and item-specific question types had higher endorsement for the top two responses with bipolar formats and lower endorsement for the top two responses with unipolar formats. Additionally, we found very little difference in the criterion-related validity between the two question types. Based on these results, acquiescence bias in agreement question types appears to be illusory and has been confused with polarity differences.


Neither here nor there? Respondents’ explanation for choosing the midpoint in bipolar scales

Dr Eva Aizpurua ( ) - Presenting Author
Dr Carmen María León (University of Castilla-La Mancha)

In bipolar scales, the midpoint acts as a transitional point between the two poles measured (e.g., satisfied/dissatisfied, good/bad). The way respondents use it, however, remains unclear: midpoint answers have the potential to capture true neutral responses, but they may also indicate uncertainty, ambivalence, lack of opinion, or satisficing behavior. In this paper, we examine the reasons respondents provided for choosing the midpoint in two bipolar scales (favor/oppose) measuring attitudes toward crime reporting. The data come from an online survey fielded in February 2022 using the Netquest panel in Spain (N = 1,603). Respondents who provided midpoint responses (n = 142 and n = 107) received an open-ended probe asking them to provide further details on why they chose such responses. Their answers were coded and classified into substantive and non-substantive responses (e.g., don’t know, no opinion). The results of this paper will contribute to a body of research investigating the meaning that midpoint responses have for respondents and the extent to which they represent true neutral answers. The implications of these findings for the analysis of these two questions (e.g., treating variables as ordinal, recoding midpoint responses) are also discussed.