Wednesday 17th July 2013, 16:00 - 17:30, Room: No. 1

Social Desirability Bias in Sensitive Surveys: Theoretical Explanations and Data Collection Methods 3

Convenor: Dr Ivar Krumpal (University of Leipzig)
Coordinator 1: Professor Ben Jann (University of Bern)
Coordinator 2: Professor Mark Trappmann (Institute for Employment Research Nürnberg)

Session Details

Survey measures of sensitive characteristics (e.g. sexual behaviour, health indicators, illicit work, voting preferences, income, or unsocial opinions) based on respondents' self-reports are often distorted by social desirability bias. More specifically, surveys tend to overestimate socially desirable behaviours or opinions and underestimate socially undesirable ones, because respondents adjust their answers in accordance with perceived public norms. Furthermore, nonresponse has a negative impact on data quality, especially when the missing data are systematically related to key variables of the survey. Besides psychological aspects (such as a respondent's inclination to engage in impression management or self-deception), cumulative empirical evidence indicates that the use of specific data collection strategies influences the extent of social desirability bias in sensitive surveys. Better data quality can be achieved by choosing appropriate data collection methodologies.

This session has three main goals: (1) to discuss the theoretical foundation of research on social desirability bias in the context of a general theory of human psychology and social behaviour. For example, a clearer understanding of the social interactions between the actors involved in the data collection process (respondents, interviewers, and data collection institutions) could provide empirical researchers with a substantiated basis for optimizing the survey design to achieve high-quality data; (2) to present experimental results evaluating conventional methods of data collection for sensitive surveys (e.g. the randomized response technique and its variants) as well as innovative new survey designs (e.g. mixed-mode surveys, item sum techniques). This also includes advancements in the methods for statistical analysis of data generated by these techniques; (3) to discuss future perspectives for tackling the problem of social desirability and present possible alternative approaches for collecting sensitive data. This may include, for example, record linkage approaches, surveys without questions (e.g. biomarkers), and non-reactive measurement.
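For readers unfamiliar with the randomized response techniques mentioned above, the following minimal Python sketch (not part of the session description, and with purely illustrative parameter values) simulates Warner's classic randomized response design and recovers a prevalence estimate from the observed proportion of "yes" answers.

```python
import random

def simulate_warner_rrt(true_prevalence, p_truth, n, seed=0):
    """Simulate Warner's (1965) randomized response design.

    Each respondent secretly answers the sensitive statement with
    probability p_truth, and its negation with probability 1 - p_truth,
    so no individual answer reveals the respondent's true status.
    """
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n):
        has_trait = rng.random() < true_prevalence
        asked_sensitive = rng.random() < p_truth
        # Respondent answers "yes" if the randomly selected statement is true for them.
        answer_yes = has_trait if asked_sensitive else not has_trait
        yes_count += answer_yes
    lam = yes_count / n  # observed proportion of "yes" answers
    # Moment estimator of the sensitive-trait prevalence (requires p_truth != 0.5)
    pi_hat = (lam - (1 - p_truth)) / (2 * p_truth - 1)
    return pi_hat

# Example: 20% true prevalence, sensitive statement selected with probability 0.7
print(simulate_warner_rrt(true_prevalence=0.20, p_truth=0.7, n=10_000))
```

The privacy protection comes from the fact that a "yes" answer is consistent with either status; only the aggregate proportion identifies the prevalence.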


Paper Details

1. Mode Differences in Socially Desirable Answers to Sensitive Questions

Professor Michael Traugott (University of Michigan)
Ms Ashley Jardina (University of Michigan)

A significant amount of misinformation about Barack Obama circulates in the American population - about his citizenship, religion, and eligibility to serve as President. An important explanatory factor for these beliefs is racial resentment, beyond partisanship and political ideology. In this paper, we compare levels of socially desirable responses to four questions used to measure racial resentment, a measure of belief about where Obama was born, and a measure of whether he is a Muslim, as reported in surveys conducted on the telephone with live interviewers, using IVR techniques on the phone, and through web surveys with different sample designs. In addition to controlling for question wording, the exact same questionnaire was used in four of the web surveys. The analysis also investigates the relationship between racial resentment and beliefs about Obama associated with mode, and in one survey we also analyze the correlates of racial attitudes by media use and attention as well as web use and attention.



2. Social desirability bias caused by image management in social position variables

Miss Ave Roots (University of Tartu)

People are influenced by the interviewer in different ways for different kinds of questions. Some questions may seem intrusive to respondents and therefore cause them to report attitudes and behaviours they consider more culturally accepted. There are three types of questions that cause social desirability bias. First, questions connected to sensitive and taboo topics; these are very culture specific, such as attitudes towards immigrants or sexual behaviour. Second, questions connected to social norms, for example fulfilling the duty of a citizen by voting and following the law. Third, questions that relate to image management, such as social position. Different surveys show that there is less social desirability bias in these kinds of questions when the interviewer is not present and people can answer privately.

This presentation studies the process of image management based on social position. Income is often considered a very sensitive question, which produces many nonresponses, and the answers also differ by survey mode. Occupation, education, self-rated social position, and economic subsistence are not as sensitive as income, but they are also connected to image management.
The aim of this presentation is to see whether there is social desirability bias in reporting objective and subjective social position, and how great it is, by studying how these variables are connected to each other in different data collection modes (CAPI and CAWI).


3. Factors of social desirability across modes: evidence from eye-tracking

Dr Olena Kaminska (University of Essex)
Dr Tom Foulsham (University of Essex)

Most of the survey literature attributes social desirability to deliberate misreporting by a respondent due to lack of comfort, for example an attempt to look better in the eyes of an interviewer. A much less popular strand of literature suggests that social desirability can result from superficial cognitive processing (satisficing). Our paper provides an empirical study of whether satisficing is related to social desirability using an innovative method of real-world eye-tracking. The method enables detecting the latency of eye gazes in web, face-to-face, and paper-and-pencil self-administered (SAQ) modes. The data collection was completed in the Psychological Lab of the University of Essex in 2012. Through the latency of eye gazes we infer the attention paid by each respondent to question wording and to socially desirable and socially undesirable response scale points. We link the gaze latency measures to responses to understand how respondents arrive at socially desirable or undesirable answers. We find that satisficing is related to social desirability in self-completion modes. Yet it does not explain the higher incidence of social desirability in face-to-face mode.


4. Surveying violence against men: comparing results from CATI (with and without advance letter) and mail questionnaires

Dr Susanne Vogl (Catholic University Eichstaett-Ingolstadt)

It is assumed that reported violence underestimates the actual prevalence of violence in intimate relationships. Intimate partner violence (particularly as experienced by men) is considered a sensitive topic, and self-reports are likely to be subject to social desirability bias. A number of reasons have been hypothesised as to why respondents might be reluctant to report victimisation by their partner. There is evidence that specific data collection strategies have an impact on the extent of social desirability bias in sensitive surveys. The research question pursued here is: Do the interview mode (mail versus CATI) and a prenotification letter (CATI) have an impact on the reported prevalence of violence against men?
In a study on violence against men in intimate relationships in 2007, we employed almost identical questionnaires in a CATI survey (with and without a prenotification letter) and a mail survey on the same sample of men aged 18 to 70 living in Bavaria. In total, 1,005 CATI and 200 mail questionnaires were completed. We compare the three experimental groups (telephone with prenotification, telephone without prenotification, and mail questionnaire) regarding the reported prevalence of intimate partner violence to draw conclusions about data quality and biases. Findings indicate that the absence of an interviewer increases the frequency of violence reported, as does a prenotification letter. Alongside more detailed analyses of effects on different groups of respondents, we hope to promote informed decisions in choosing a data collection method that allows for better data quality.