
ESRA 2019 Programme at a Glance


Designing Questionnaires – The Role of Question Order on Measurement

Session Organisers: Mr Alexandre Pollien (FORS, University of Lausanne); Miss Jessica Herzing (FORS/LINES, University of Lausanne)
Time: Thursday 18th July, 09:00 - 10:30
Room: D18

Survey researchers perceive a survey as a conversation between the researcher and the respondent. As in conversations, the thematic context formed by a series of questions implicitly frames respondents’ understanding. Hence, the order of questions, as well as changes of thematic context between question blocks, can affect respondents’ answers (question order effects). Question order effects are therefore a concern for survey practitioners when designing questionnaires.
Yet results on how to order questions and question topics in general remain inconclusive. The emergence, direction and size of context effects in attitude questions are well understood in the context of the literature on framing (e.g., Schwarz and Sudman 1992). Furthermore, primacy and recency effects have been discussed in the survey methodological literature. However, previous research has shown that question order effects do not rely exclusively on the questionnaire: the effect also operates through the respondent’s activity. Levels of education (Sudman and Bradburn, 1974) or, more specifically, attitude strength (Ajzen et al. 1982) tend to make respondents react differently to question order. Such perspectives, in which question order interacts with respondent characteristics, should be addressed and discussed in this session.
For this session, we invite submissions from experimental studies on question order effects. We are especially interested in approaches that compare question order effects across different settings, such as cultural comparisons (is the effect the same across languages or political contexts?), mode comparisons (does the effect differ according to the layout of the questions?), comparisons of social contexts (attitude strength, social status of respondents), and comparisons of questionnaire fatigue (the burden of preceding questions). Furthermore, we encourage submissions that investigate question order effects on measurement when splitting questionnaires into different parts (thematic or random question order), e.g., surveys with mixed or modular questionnaire designs.

Keywords: questionnaire development, question order, split questionnaire, matrix questionnaire design, measurement issues, attitude questions

Studying the Priming Effect of Family Norms on Gender Roles’ Attitudes: An Experimental Design

Miss Angelica Maria Maineri (Tilburg University) - Presenting Author
Mrs Vera Lomazzi (Gesis - Leibniz Institute for Social Sciences)
Mr Ruud Luijkx (Tilburg University)

According to the theoretical and empirical literature, the measurement of gender role attitudes is controversial in many ways. Alongside issues of content validity, the instruments used by several survey programmes to measure attitudes towards gender roles appear to be particularly sensitive to cultural bias, increasing the risk of measurement inequivalence. The results of previous empirical studies exploring the quality of the measurement of gender role attitudes in the fourth wave of the European Values Study (EVS) led us to suppose that the measurement could also be sensitive to priming effects, in particular when preceding questions ask the respondent to express normative beliefs concerning family relations.
The current study adopts the theoretical perspective of the construal model of attitudes, which argues that respondents make use of the most recently available information to interpret a question and express their judgement. From this perspective, the adjacent questions constitute the context for interpreting the scale on gender roles, and could therefore influence the answers.
Our study aims to assess the priming effect of the family norms questions on the measurement of gender role attitudes, employing data from two experiments fielded in Waves 1 and 5 of the CROss-National Online Survey (CRONOS) panel.
Following the example of previous studies assessing question order effects, we explore the possible occurrence of the priming effect, and its potentially different manifestations, in three countries (Estonia, Great Britain and Slovenia) by adopting several techniques. Looking at the differences across experimental settings and countries, we focus on the gender role attitudes scale and compare its reliability, its construct validity, and the model fit of the measurement. Finally, we perform a multi-group confirmatory factor analysis to investigate whether the potential order effect affects measurement equivalence across experimental settings and across countries.
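
As an illustration of the reliability comparison across experimental settings, here is a minimal sketch in Python; the file and column names (v1–v5 for the scale items, condition for the experimental group) are hypothetical, not the authors’ actual analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of scale items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical layout: v1..v5 hold the gender role items; 'condition'
# records whether the family norms questions preceded the scale.
df = pd.read_csv("cronos_experiment.csv")  # hypothetical file name
items = [f"v{i}" for i in range(1, 6)]
for condition, group in df.groupby("condition"):
    print(condition, round(cronbach_alpha(group[items].dropna()), 3))
```

The construct validity and multi-group CFA steps would typically be estimated in dedicated structural equation modelling software.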


Experimental Study of Different Formulations and Order of Questions Concerning Propensity to Invest

Ms Weronika Boruc (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author

My presentation concerns the results of an experimental study comparing different formulations and different orderings of similar questions concerning the willingness to invest potential financial resources in a (more or less) risky investment, as used in the German Socio-Economic Panel (SOEP) and the Polish Panel Survey (POLPAN). These questions have been asked in the two panel surveys in different contexts: in SOEP the “investment” question was preceded by a self-assessment of risk attitudes, while in POLPAN it was asked in the framework of willingness to become an entrepreneur or attitudes towards privatisation. My analyses of POLPAN and SOEP data lead to the conclusion that the context in which these questions were asked has a significant influence on respondents’ answers. This concerns both the general declaration of willingness to invest and the declared sums of money of the potential investment.
The presentation will demonstrate the results of an experimental study conducted to analyse the effects of different formulations and contexts of the questions. Dividing the experiment participants into two groups and asking them similar questions concerning investment propensity, each preceded by different questions, allows us to create different contexts for answering the main question and to observe how different formulations of this question influence the answers.
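
As a sketch of how such a split-ballot comparison might be analysed (the file and column names are hypothetical, not the study’s actual code), a Welch’s t-test contrasts the declared investment amounts between the two context groups:

```python
import pandas as pd
from scipy import stats

# Hypothetical layout: 'group' records which questions preceded the
# investment item (e.g., risk self-assessment vs. entrepreneurship);
# 'invest_amount' is the declared sum the respondent would invest.
df = pd.read_csv("investment_experiment.csv")  # hypothetical file name
a = df.loc[df["group"] == "A", "invest_amount"].dropna()
b = df.loc[df["group"] == "B", "invest_amount"].dropna()

# Welch's t-test: does the question context shift the declared amounts?
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"means: A = {a.mean():.1f}, B = {b.mean():.1f}; t = {t:.2f}, p = {p:.4f}")
```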


Respondents’ Attention and Response Behaviour: Comparing Positioning Effects of a Scale on Impulsive Behaviour

Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author

Previous research has shown that the quality of survey data is affected by questionnaire length. As the number of questions respondents have to answer increases, they can become bored, tired and annoyed. This may increase respondent burden and decrease the motivation to provide meaningful answers, which might in turn increase the risk of satisficing behaviour.

This paper investigates the effects of item positioning on data quality in a web survey.
In a lab experiment employing eye-tracking technology, 130 respondents answered a grid question on impulsive behaviour consisting of eight items with a five-point response scale. The question was randomly presented either at the beginning or at the end of the web questionnaire.

The position of the question was predicted to influence a variety of indicators of data quality and response behaviour in the web survey: item nonresponse, response times, response differentiation, as well as measures of attention and cognitive effort operationalised by fixation counts and fixation times. In addition, we investigate whether the position of the scale on impulsive behaviour affects the comparability of its correlations with other personality variables.
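
To make the survey-side indicators concrete, here is a minimal sketch of how item nonresponse, response differentiation and straightlining could be computed for such a grid question; the file and column names are hypothetical, and the eye-tracking measures are omitted.

```python
import pandas as pd

# Hypothetical layout: imp1..imp8 hold the eight grid items (1-5 scale,
# NaN = item nonresponse); 'position' marks beginning vs. end placement.
df = pd.read_csv("grid_experiment.csv")  # hypothetical file name
grid = df[[f"imp{i}" for i in range(1, 9)]]

df["n_missing"] = grid.isna().sum(axis=1)          # item nonresponse
df["differentiation"] = grid.std(axis=1, ddof=1)   # spread across items
df["straightlined"] = grid.nunique(axis=1) == 1    # identical answer on every item

# Compare the indicators between the two question positions.
print(df.groupby("position")[["n_missing", "differentiation", "straightlined"]].mean())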


Split Questionnaire Design: Can a Question Battery Be Split and Still Produce the Same Measurements? Evaluating Context Effects in the Study of Moral Values

Miss Jimena Sobrino Piazza (SSP) - Presenting Author
Mr Alexandre Pollien (FORS)
Professor Caroline Roberts (University of Lausanne)

The meaning of a question is not entirely embedded in the question itself. Among the many factors that may influence the way respondents understand and answer a given question, the questionnaire content may play an important role. Earlier questions in the questionnaire generate a context in which subsequent questions are embedded. The meaning of questions, as well as the ideas and the standards of comparison respondents consider when answering them, are all influenced by questions previously answered in the questionnaire. This raises problems of comparison when the same question is asked in different questionnaires. This range of problems has been referred to as “context effects” in the methodological literature.

At a moment when split questionnaire design (questionnaire modularization) is becoming increasingly popular, this study aims to contribute to the refinement of splitting strategies in order to prevent the introduction of context effects. A particular question setting will be studied: question batteries in which items related to different constructs are intermixed and share a common rating scale. Can a question battery be split and still produce the same measurements, both at the item level and at the scale level? The question battery on moral beliefs of the Swiss EVS 2017 (European Values Study) will be analysed. This question battery was administered in two versions, as a result of a split questionnaire design. Measurements from the whole and split versions of the question battery will be compared. First, at the item level of analysis, estimates of the means will be compared. At the multi-item level, a reliability test and a multi-group confirmatory factor analysis (MCFA) will be conducted in order to reveal potential differences in the consistency and factor structures of the multi-item constructs.
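
As a small illustration of the item-level comparison between the whole and split versions (a sketch under assumed file and column names, not the authors’ code):

```python
import pandas as pd

# Hypothetical layout: m1..m10 are the moral belief items sharing one
# rating scale; 'version' records which form of the split battery (or
# the full battery) a respondent received.
df = pd.read_csv("evs_split_battery.csv")  # hypothetical file name
items = [f"m{i}" for i in range(1, 11)]

# Item-level check: do the split versions reproduce the same means
# (with standard errors) as the full battery?
print(df.groupby("version")[items].agg(["mean", "sem"]).round(2))
```

The reliability and MCFA comparisons would follow the same grouping, with the measurement models estimated separately per version.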