Wednesday 15th July, 11:00 - 12:30 Room: HT-104


The impact of questionnaire design on measurements in surveys 2

Convenor: Dr Natalja Menold (GESIS)
Coordinator: Ms Kathrin Bogner (GESIS)

Session Details

Questionnaire design is crucial for obtaining high-quality survey data. Still, there is a great need for research that helps to better understand how, and under which conditions, different design aspects of questionnaires affect the measurement process and survey data quality. Researchers are therefore invited to submit papers dealing with questionnaire design features such as question wording, visual design and answer formats, instructions, introductions, and other relevant design aspects of questionnaires. Different means of measurement, such as questions with nominal answer categories, rankings, ratings, semantic differentials, or vignettes, can also be addressed or serve as a matter of comparison. Of interest is the impact of questionnaire design on response behavior, on systematic as well as non-systematic error, and on validity. In addition, respondents’ cognition or motivation can be the focus of the studies.

Paper Details

1. Smart Respondents: Let’s Keep It Short.
Mrs Inna Becher (LINK Institute for Market and Social Research, Zurich)

Survey questions in web surveys often include additional instructions for respondents regarding the use of the scale. Over the whole questionnaire, such instructions can considerably increase its length. In many cases, however, these scale instructions are unnecessary, because the fully labeled answer scale is already provided as part of the visually displayed response options.
We use survey data to demonstrate that respondents apply rational strategies when completing the survey and skip the scale descriptions when the displayed answer scale gives them enough information. Thus, explanations of the scales in web surveys can often be omitted.


2. From paper to web: evaluation by data providers and data analysts. The case of the annual survey on finances of enterprises
Mrs Deirdre Giesen (Statistics Netherlands)

One of the main features of web surveys is that they can be interactive and provide various ways to improve the quality of the collected data, for example by including controls. Some research has been done on how these features can best be implemented in business surveys. However, many statistical agencies are still struggling with how best to design their web questionnaires for business surveys. This paper evaluates how both data providers and the users of the raw data (editors and analysts) assess the transition from paper to web in general, and the use of controls and codes specifically.



3. Is variation in perceptions of inequality and redistribution actual or artifactual? Effects of wording, order, and number of items.
Dr Kinga Wysieńska-di Carlo (Institute of Philosophy and Sociology, Polish Academy of Sciences)
Dr Zbigniew Karpiński (Institute of Philosophy and Sociology, Polish Academy of Sciences)

We conducted a methodological experiment in which we systematically manipulated the features of questionnaire items traditionally used to measure perception of and attitudes towards inequalities and redistributive policies. Specifically, we manipulated the order of questions regarding perceived inequalities, support for redistribution, and respondents’ views on tax progressivity. We also manipulated the number and specificity of occupational titles used to assess individual perceptions and legitimacy of income inequalities. The results of the experiment were used to test hypotheses regarding the degree of between-subject agreement, the stability of answers to income inequality questions and the degree of support for redistribution.


4. Does the Position of Non-cognitive Tests in the Questionnaire Affect Data Quality?
Dr Rafael Novella (Inter-American Development Bank)

This paper investigates whether the position of non-cognitive tests in the questionnaire affects data quality. To test this, we randomly assigned two versions of the questionnaire among nearly five thousand individuals: the first with the tests as the sixth module out of 19, and the second with them as the fifteenth. This experiment was conducted during the data collection of the first follow-up of a randomized control trial on labor training in Chile. The experimental setting allows us to establish a causal relationship between the position of the tests and data quality.


5. Exploring a new way to avoid errors in attitude measurements due to complexity of “scientific” terms: an example with the term biodiversity
Miss Léïla Eisner (University of Lausanne)
Professor Caroline Roberts (FORS, University of Lausanne)

The aim of this research was to test the influence of question wording on the quality of a survey. In particular, the focus was on errors in attitude measurements due to the unfamiliarity or complexity of scientific terms used in questions. In order to measure attitudes towards “biodiversity”, the words that respondents associate with biodiversity were first identified, based on statistical analyses of answers to an open-ended question. Items were then composed from these words. These items were tested in two versions of a questionnaire, which was distributed in January 2015 to urban residents of Geneva (N = 2000).