
Tuesday 14th July, 16:00 - 17:30 Room: O-101

Surveying Sensitive Issues: Challenges and Solutions 2

Convenor: Mr Marc Hoeglinger (ETH Zurich)
Coordinator 1: Professor Andreas Diekmann (ETH Zurich)
Coordinator 2: Professor Ben Jann (University of Bern)

Session Details

Surveying sensitive issues such as deviant behavior, stigmatizing traits, or controversial attitudes poses two challenges. The first is data validity: respondents are likely to misreport when asked sensitive questions, refuse to answer them, or even break off the interview. As a result, measurements are biased or incomplete. The second is protecting respondents’ privacy: respondents’ data must be carefully guarded to avoid leakage of sensitive personal information. Although this concerns almost all surveys in principle, it becomes much more important when, for instance, highly illegal behavior or political attitudes under repression are surveyed.

Switching to self-administered survey modes such as online interviews mitigates undesired response effects to some extent. Adjusting the questionnaire design and question wording might also attenuate response effects. However, empirical results are inconclusive so far, and results seem to depend highly on the particular issue and population surveyed. Providing respondents with full response privacy through indirect techniques such as the Randomized Response Technique or the Item Count Technique is a potential solution to both problems. However, although these methods fully protect privacy if properly implemented, respondents often lack understanding of and trust in them, so misreporting might not be reduced.
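The estimation logic behind such indirect techniques can be illustrated with Warner’s original randomized response design, in which a randomizing device makes each respondent answer either the sensitive question (with known probability p) or its negation. The sketch below is illustrative only and is not taken from any of the papers in this session; the function name and parameter values are hypothetical:

```python
def warner_estimate(n_yes, n, p):
    """Moment estimator for Warner's randomized response design.

    n_yes : observed number of 'yes' answers
    n     : sample size
    p     : known probability that the respondent answers the
            sensitive question directly (must differ from 0.5)

    P(yes) = p*pi + (1-p)*(1-pi), so pi can be recovered from the
    observed 'yes' proportion. Returns (estimate, standard error).
    """
    if p == 0.5:
        raise ValueError("p must differ from 0.5")
    lam = n_yes / n                              # observed 'yes' share
    pi_hat = (lam - (1 - p)) / (2 * p - 1)       # prevalence estimate
    var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
    return pi_hat, var ** 0.5

# Hypothetical numbers: 380 'yes' answers among 1,000 respondents, p = 0.7
pi_hat, se = warner_estimate(380, 1000, 0.7)
```

Note the privacy–efficiency trade-off visible in the variance term: the closer p is to 0.5, the better the privacy protection, but the larger the standard error of the estimate.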

In this session we invite submissions that deal with problems of surveying sensitive issues and/or present potential solutions. We are interested in studies that evaluate established methods such as indirect questioning techniques, but also in contributions that propose novel strategies. Furthermore, we encourage submissions that deal with the concept of “sensitivity” and present theoretical frameworks and/or empirical analyses that shed light on the cognitive process of answering sensitive questions and “editing” responses. Submissions on statistical methods to analyze data from special questioning techniques are also welcome.

Paper Details

1. Surveying sensitive questions: Prevalence estimates of self-reported delinquency using the crosswise model
Dr Dirk Enzmann (University of Hamburg, Institute of Criminal Sciences)

Prospects and problems of the crosswise model as an alternative to the randomized response technique will be investigated using data from the recent ISRD (International Self-Report Delinquency) study. The paper illustrates the application of the crosswise model in cross-national comparative research for estimating cultural differences in socially desirable responding. Results show that using estimates of true answers may result in considerably higher prevalence rates and different causal models of delinquent behavior. Suggestions for improving the method will be discussed.
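In the crosswise model, respondents report only whether their answers to the sensitive item and to an unrelated item with known prevalence (e.g. a birthday question) coincide, so no randomizing device is needed and the sensitive prevalence can be recovered with a simple moment estimator. The following simulation sketch is illustrative, not code from the paper; all names and parameter values are hypothetical:

```python
import random

def simulate_crosswise(n, pi, p, seed=1):
    """Simulate crosswise-model answers for n respondents.

    pi : true prevalence of the sensitive trait (unknown in practice)
    p  : known prevalence of the unrelated item (e.g. a birthday question)
    Returns the number of 'A' answers ('yes to both or no to both').
    """
    rng = random.Random(seed)
    return sum((rng.random() < pi) == (rng.random() < p)
               for _ in range(n))

def crosswise_estimate(n_a, n, p):
    """Moment estimator: P(A) = pi*p + (1-pi)*(1-p), solved for pi."""
    lam = n_a / n
    return (lam - (1 - p)) / (2 * p - 1)

# Recover a hypothetical 30% prevalence from simulated answers (p = 0.25)
n = 100_000
est = crosswise_estimate(simulate_crosswise(n, 0.3, 0.25), n, 0.25)
```

Because neither answer option (“same” or “different”) is self-incriminating, there is no obviously “safe” response for respondents to flee to, which is the property often cited in favor of the crosswise design.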

2. Pouring water into the wine: The advantages of the crosswise model for asking sensitive questions revisited
Ms Sandra Walzenbach (University of Konstanz)
Professor Thomas Hinz (University of Konstanz)

The crosswise model is a rather new method developed to eliminate effects of social desirability when sensitive questions are asked in surveys. While first empirical studies found it promising, we present a survey experiment based on a general population sample in which we overcome common limitations of previous research on the topic. By assessing socially desirable rather than negatively connoted behavior, we can clearly distinguish whether higher prevalence rates are driven by honest answers or by respondents’ tendency to select answers randomly. Our results to some extent temper the positive reception the crosswise model has received so far.

3. Effects of Survey Sponsorship and Mode of Administration on Respondents’ Answers about their Racial Attitudes
Professor Volker Stocké (University of Kassel)

This paper analyzes effects of the survey sponsor on respondents’ answers about their attitudes toward ethnic minorities and the processes responsible for this effect. In a split-ballot experiment the survey sponsor was either a university or a commercial marketing research firm, and subjects were assigned to an interviewer- or a self-administered mode of data collection. Respondents from a random probability sample (N=218) answered the blatant- and subtle-prejudice scales after being randomly assigned to one of the four experimental conditions. University sponsorship provoked more positive racial attitude answers, but this effect was found only for the blatant-prejudice scale.

4. The Impact of Survey Mode (Mail versus Telephone) and Asking About Future Intentions
Dr Timothy Beebe (Mayo Clinic)

Evidence suggests that asking about future intentions to get screened for colorectal cancer (CRC) before the actual question about past screening behavior increases the accuracy of self-reports, possibly because respondents are under less social pressure to over-report in this context. We describe the results of two experiments that investigate this question-order effect, along with survey mode (mail vs. telephone), on the accuracy of self-reported CRC screening. We found that asking about future intentions significantly lowered reports of past CRC screening in one experiment but not the other. We also observed a variable impact of survey mode. Possible reasons for these findings will be discussed.

5. The effect of socio-demographic (mis)match between interviewers and respondents on the data quality of answers to sensitive questions
Dr Anina Vercruyssen (KU Leuven)
Ms Celine Wuyts (KU Leuven)
Professor Geert Loosveldt (KU Leuven)

Interviewer characteristics and interactions with respondents in face-to-face surveys can lead to measurement errors. In interview situations, social desirability bias concerning sensitive questions is also a source of error. We investigate whether homosociality, the tendency to want to interacting with similar people, e.g. similar education level, leads to less error in sensitive questions about income, voting behaviour, political affiliation, and (mental) health issues in the Belgian data of ESS6. Although for some sensitive questions socio-demographic mismatch did not cause much 'noise', some interesting differences in item non-response and interviewer variance are found.