ESRA 2019 Draft Programme at a Glance
Designing questionnaires – the role of question order on measurement
Session Organisers: Mr Alexandre Pollien (FORS, University of Lausanne); Miss Jessica Herzing (FORS/LINES, University of Lausanne)
Time: Thursday 18th July, 09:00 - 10:30
Survey researchers perceive a survey as a conversation between the researcher and the respondent. As in conversations, the thematic context formed by a series of questions implicitly frames respondents' understanding. Hence, the order of questions, as well as changes of thematic context between question blocks, can affect respondents' answers (question order effects). Question order effects are therefore a concern for survey practitioners when designing questionnaires.
Yet, results on how to order questions and question topics in general remain inconclusive. The emergence, direction and size of context effects in attitude questions are well understood in the context of the literature on framing (e.g., Schwarz and Sudman 1992). Furthermore, primacy and recency effects have been discussed in the survey methodological literature. However, previous research has shown that question order effects do not rely exclusively on the questionnaire: the effect also arises through the respondent's own activity. Levels of education (Sudman and Bradburn, 1974) or, more specifically, attitude strength (Ajzen et al. 1982) tend to make respondents react differently to the question order. Such perspectives, in which the question order interacts with respondent characteristics, should be addressed and discussed in this session.
For this session, we invite submissions of experimental studies on question order effects. We are especially interested in approaches that compare question order effects across different settings, such as cultural comparisons (is the effect the same across languages or political contexts?), mode comparisons (does the effect differ according to the layout of the questions?), comparisons of social contexts (attitude strength, social status of respondents), and comparisons of questionnaire fatigue (burden of the preceding questions). Furthermore, we encourage submissions that investigate question order effects on measurement when splitting questionnaires into different parts (thematic or random question order), e.g., surveys with a mixed or modular questionnaire design.
Keywords: questionnaire development, question order, split questionnaire, matrix questionnaire design, measurement issues, attitude questions
Studying the Priming Effect of Family Norms on Gender Roles’ Attitudes: An Experimental Design
Miss Angelica Maria Maineri (Tilburg University) - Presenting Author
Mrs Vera Lomazzi (Gesis - Leibniz Institute for Social Sciences)
Mr Ruud Luijkx (Tilburg University)
According to the theoretical and empirical literature, the measurement of gender role attitudes is controversial in many ways. Alongside issues of content validity, instruments used by several survey programmes to measure attitudes towards gender roles appear to be particularly sensitive to cultural bias, increasing the risk of measurement inequivalence. The results of previous empirical studies exploring the quality of the measurement of gender role attitudes included in the fourth wave of the European Values Study (EVS) led us to suppose that the measurement could also be sensitive to a priming effect, in particular when preceding questions ask the respondent to express normative beliefs concerning family relations.
The current study adopts the theoretical perspective of the construal model of attitudes, which argues that respondents make use of the most recently available information to interpret the question and express their judgment. From this perspective, the adjacent questions constitute the context for interpreting the scale on gender roles and could therefore influence the answers.
Our study aims at assessing the priming effect of the family norms questions on the measurement of gender role attitudes by employing data from two experiments fielded in Waves 1 and 5 of the CROss-National Online Survey (CRONOS) panel.
Following the example of previous studies assessing question order effects, the study explores the possible occurrence of the priming effect and its potentially different manifestations in three countries (Estonia, Great Britain and Slovenia) by adopting several techniques. Looking at the differences across experimental settings and countries, we focus on the gender role attitudes scale and compare its reliability, construct validity, and the model fit of the measurement. Finally, we perform multi-group confirmatory factor analysis to investigate whether the potential order effect affects measurement equivalence across experimental settings and across countries.
Experimental Study of Different Formulations and Order of Questions Concerning Propensity to Invest
Ms Weronika Boruc (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
My presentation concerns the results of an experimental study comparing different formulations and different orderings of similar questions concerning the willingness to invest potential financial resources in a (more or less) risky investment, which have been used in the German Socio-Economic Panel (SOEP) and the Polish Panel Survey (POLPAN). These questions have been asked in the two panel surveys in different contexts: in SOEP the “investment” question was preceded by a self-assessment of risk attitudes, whereas in POLPAN it was asked in the framework of willingness to become an entrepreneur or attitudes towards privatization. My analyses of POLPAN and SOEP data lead to the conclusion that the context in which these questions were asked has a significant influence on respondents’ answers. This concerns both the general declaration of willingness to invest and the declared sums of money of the potential investment.
The presentation will demonstrate the results of an experimental study conducted to analyze the effects of different formulations and contexts of the questions. Dividing the experiment participants into two groups and asking them similar questions concerning investment propensity, each preceded by different questions, allows us to create different contexts for answering the main question, as well as to observe how different formulations of this question influence their answers.
Respondents’ attention and response behavior: Comparing positioning effects of a scale on impulsive behavior
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Previous research has shown that the quality of survey data is affected by questionnaire length. With an increasing number of survey questions to answer, respondents can become bored, tired and annoyed. This may increase respondents’ burden and decrease their motivation to provide meaningful answers, which in turn increases the risk of satisficing behavior.
This paper investigates effects of item positioning on data quality in a web survey.
In a lab experiment employing eye-tracking technology, 130 respondents answered a grid question on impulsive behavior consisting of eight items with a five-point response scale. The question was randomly presented either at the beginning or at the end of the web questionnaire.
The position of the question was predicted to influence a variety of indicators of data quality and response behavior in the web survey: item nonresponse, response times, response differentiation, as well as measures of attention and cognitive effort operationalized by fixation counts and fixation times. In addition, we investigate whether the position of the scale on impulsive behavior affects the comparability of correlations with other personality variables.
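Two of the data-quality indicators named above can be computed with very little code. The following is an illustrative sketch (not the authors' actual analysis code): it computes item nonresponse and response differentiation for a single respondent's answers to an eight-item grid on a five-point scale; the respondent data are hypothetical.

```python
import statistics

def item_nonresponse_rate(responses):
    """Share of the grid's items left unanswered (None = missing)."""
    missing = sum(1 for r in responses if r is None)
    return missing / len(responses)

def response_differentiation(responses):
    """Within-respondent standard deviation across answered items.
    A value near 0 indicates straightlining (low differentiation)."""
    answered = [r for r in responses if r is not None]
    if len(answered) < 2:
        return None
    return statistics.stdev(answered)

# Hypothetical respondents: a straightliner vs. a differentiating one.
straightliner = [3, 3, 3, 3, 3, 3, 3, 3]
differentiated = [1, 5, 2, 4, None, 3, 5, 1]

print(item_nonresponse_rate(straightliner))    # 0.0
print(response_differentiation(straightliner)) # 0.0
```

Comparing the distribution of such indicators between the beginning-position and end-position groups is one way to operationalize the positioning effect the abstract describes.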
Comparison through different questionnaires: context effects and order effects in the measurement of values
Miss Jimena Sobrino Piazza (SSP) - Presenting Author
Mr Alexandre Pollien (FORS)
The meaning of a question is not entirely embedded in the question itself. The words used in questions are supported by external references, which help respondents to interpret the question's meaning and retrieve information from memory. When, for example, the concept of democracy is evoked in a question, respondents ascribe a meaning according to what they think is meant by "democracy" and contrast it with their own experience of what they think democracy is. The way in which the survey is introduced, together with the content of previous questions, may act as an anchor and may also serve as a frame of reference. These phenomena therefore raise problems of comparison when the same question is asked across different questionnaires: split ballot or matrix designs, or questions repeated in another survey. This range of problems is referred to as "context effects" in the methodological literature.
We will address the issue of context effects in the case of the EVS 2017 (European Values Study) conducted in Switzerland. To reduce the length of the questionnaire, we developed a matrix design. This consisted of splitting the questionnaire into different question sets so that sample units are randomly assigned to half of the questions included in the questionnaire. This kind of design avoids introducing additional selection effects between two modules fielded at two different times (first half / second half), but it introduces differences in the context in which many questions are asked, as a consequence of the distinct combinations. After presenting a typology of the context effects that we encountered in the EVS and an assessment of the global quality of the survey, we will analyse some examples of context effects in more depth and provide guidelines for avoiding them as far as possible when designing questionnaires.
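The random assignment underlying such a matrix design can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the EVS implementation: the question-set labels and the number of sets are hypothetical; each sample unit simply receives a random half of the sets.

```python
import random

# Hypothetical thematic question sets of the questionnaire.
QUESTION_SETS = ["A", "B", "C", "D"]

def assign_question_sets(rng):
    """Randomly assign half of the question sets to one sample unit."""
    sets = QUESTION_SETS[:]
    rng.shuffle(sets)
    return sorted(sets[: len(sets) // 2])

# Assign question sets to six sample units (seeded for reproducibility).
rng = random.Random(2017)
sample = [assign_question_sets(rng) for _ in range(6)]
```

Because respondents receive different combinations of sets, the same question appears in varying contexts across respondents, which is precisely the source of the context effects the abstract analyses.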