
Wednesday 15th July, 09:00 - 10:30 Room: HT-104

The impact of questionnaire design on measurements in surveys 1

Convenor: Dr Natalja Menold (GESIS)
Coordinator 1: Ms Kathrin Bogner (GESIS)

Session Details

Questionnaire design is crucial for obtaining high-quality survey data. Still, there is a great need for research that helps to better understand how, and under which conditions, different design aspects of questionnaires affect the measurement process and survey data quality. Researchers are therefore invited to submit papers dealing with questionnaire design features such as question wording, visual design and answer formats, instructions, introductions and other relevant design aspects of questionnaires. Different means of measurement, such as questions with nominal answer categories, rankings, ratings, semantic differentials or vignettes, can also be addressed or compared. Of interest is the impact of questionnaire design on response behavior, on systematic as well as non-systematic error, and on validity. In addition, respondents’ cognition or motivation can be the focus of the studies.

Paper Details

1. Designs and Developments of the Income Measures in the European Social Surveys
Dr Uwe Warner (Perl, Germany)
Professor Jürgen H.P. Hoffmeyer-Zlotnik (University of Giessen)

In social surveys, “total net household income” is an indicator of socio-economic status. It is used as an explanatory variable in mobility studies and as a socio-demographic background item in inequality research. In the social sciences, income brackets are usually sufficient for a comparative analysis of social structures.
The question design and the answer categories have to fulfill quality requirements: all possible payments accruing to a household and all its members must be reported; all households in the survey’s universe must be represented in the statistics used to derive the answer categories.

2. Item non-response and readability of survey questionnaire
Dr Mare Ainsaar (Senior research fellow)
Mr Laur Lilleoja (PhD student)
Dr Jaan Mikk (Senior research fellow)

The presentation analyses the influence of the readability of European Social Survey (ESS) 2010 questionnaire items on item non-response in two countries and two languages: English for Great Britain and Estonian for Estonia. Comparing questionnaires across several languages makes it possible to find more universal results and to avoid results that are specific to just one language or country.
We found clear evidence of the influence of questionnaire readability on item non-response. The results also suggest that the relationship between item readability characteristics and item non-response might be society-dependent.
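The abstract does not specify which readability measure was used. As an illustration only, the sketch below implements one widely used index, the Flesch Reading Ease score; the formula is standard, but the syllable-counting heuristic (vowel groups, minus a trailing silent "e") is a crude assumption of this sketch, not part of the study.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: vowel groups, minus a trailing silent 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))
```

On such an index, long, polysyllabic survey items score markedly lower than short, plainly worded ones, which is the kind of variation the study relates to item non-response.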

3. Helping Respondents Provide Good Answers in Web Surveys
Professor Mick Couper (University of Michigan)
Dr Chan Zhang (Fudan University)

We report on our third experiment comparing text boxes, drop-down lists and JavaScript lookup for entering complex responses. In the 2013 Health and Retirement Study Internet Survey we asked respondents to enter the names of up to 5 prescription drugs they are taking. Over 4,000 respondents were randomly assigned to one of the three input methods. We compare both the quality of answers and the effort (time) taken to provide them. We examine differences in performance by key respondent demographics and Internet experience, discuss some of the technical challenges, and offer some design recommendations.
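A "JavaScript lookup" is an autocomplete-style widget; its core matching step can be sketched as a simple case-insensitive prefix filter. The drug names in the usage example are hypothetical illustrations, not from the study's list.

```python
def lookup(prefix: str, names: list[str], limit: int = 10) -> list[str]:
    """Return up to `limit` names starting with the typed prefix (case-insensitive)."""
    p = prefix.strip().lower()
    if not p:
        return []  # nothing typed yet: show no suggestions
    return [n for n in names if n.lower().startswith(p)][:limit]
```

For example, `lookup("l", ["Atorvastatin", "Lisinopril", "Levothyroxine"])` returns the two names beginning with "L". Constraining entry to a curated list is what lets a lookup reduce misspellings relative to a free-text box.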

4. Rating scale labeling in web surveys: Are numeric labels advantageous?
Dr Natalja Menold (GESIS)

Despite some previous evidence that numeric labels in rating scales might be associated with lower data quality, numeric labels have been widely used in social science surveys. The present paper addresses differences in respondent burden and reliability between rating scales with numeric and verbal labels in web surveys. The results of the first, eye-tracking study show that respondent burden was higher with numeric than with verbal labels. In the second study, lower reliabilities were obtained for rating scales with numeric labels than for those with verbal labels.
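The abstract does not name its reliability measure; a common choice for multi-item rating scales is Cronbach's alpha, sketched below under that assumption (population variances used throughout).

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for k items; each inner list holds one item's
    scores across the same respondents, in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent sum scale
    sum_item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))
```

Alpha rises as items covary more strongly; "lower reliabilities" for numerically labeled scales would correspond to lower values of a coefficient like this.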