The impact of questionnaire design on measurements in surveys
|Convenor||Dr Natalja Menold (GESIS)|
|Coordinator 1||Ms Kathrin Bogner (GESIS)|
Household financial survey questions often offer ranges in order to collect some useful data even when respondents are unwilling or unable to provide an exact value. Yet, because self-reported “exact values” are often rounded, bracketed measures may sometimes be more precise than exact numbers. This paper explores the prevalence of rounding, exact values, and item nonresponse in income questions across three surveys with different range alternatives (range cards, respondent-provided ranges, unfolding brackets) and in different modes. We analyze the determinants of rounding, particularly the role of education, wealth categories, cognition/financial literacy, time to respond (using paradata), and survey mode.
We analyse the effect of an experiment eliciting respondents’ risk preferences on the recording behaviour of German consumers in a one-week diary of their point-of-sale expenditures. In the experiment, run shortly before the consumers start to fill in the diary, the respondents either win 20 euros or nothing. Our results indicate that the outcome of the game affects the number of transactions recorded, but not the quality of the information recorded or measures such as the cash share.
Numeric codes printed on the questionnaire are often used for fast and efficient data entry. However, this procedure can provoke concerns about anonymity that may lead to unit nonresponse, item nonresponse, and misreporting. We conducted an experiment in a mail survey on group-focused enmity. Our results show no difference in unit nonresponse, but item nonresponse to sensitive questions was higher on questionnaires with codes than on those without. There was also a misreporting bias towards socially desirable answers to sensitive questions when the cover letter included a statement referring to the numeric code.
It is well known that self-reports about events that occurred long before the interview may not be entirely valid and reliable. We faced such difficulties during a CATI study on the problem of stray dogs: data on the incidence of dog bites from the survey and from medical statistics did not match, with the survey showing a higher incidence. To explain the differences between official statistics and survey results, we drew on hypotheses from victimization studies about respondents’ memory processing and the impact of the questionnaire. Three experiments were carried out to test hypotheses about the ‘telescoping effect’, the ‘context effect’, and the impact of landmark events in the questionnaire.