Thursday 18th July 2013, 09:00 - 10:30, Room: No. 1

Social Desirability Bias in Sensitive Surveys: Theoretical Explanations and Data Collection Methods 4

Convenor: Dr Ivar Krumpal (University of Leipzig)
Coordinator 1: Professor Ben Jann (University of Bern)
Coordinator 2: Professor Mark Trappmann (Institute for Employment Research Nürnberg)

Session Details

Survey measures of sensitive characteristics (e.g. sexual behaviour, health indicators, illicit work, voting preferences, income, or unsocial opinions) based on respondents' self-reports are often distorted by social desirability bias. More specifically, surveys tend to overestimate socially desirable behaviours or opinions and underestimate socially undesirable ones, because respondents adjust their answers in accordance with perceived public norms. Furthermore, nonresponse has a negative impact on data quality, especially when the missing data are systematically related to key variables of the survey. Besides psychological aspects (such as a respondent's inclination to engage in impression management or self-deception), cumulative empirical evidence indicates that the choice of data collection strategy influences the extent of social desirability bias in sensitive surveys. Better data quality can thus be achieved by choosing appropriate data collection methodologies.

This session has three main goals: (1) discuss the theoretical foundation of research on social desirability bias in the context of a general theory of human psychology and social behaviour. For example, a clearer understanding of the social interactions between the actors involved in the data collection process (respondents, interviewers, and data collection institutions) could provide empirical researchers with a substantiated basis for optimizing survey designs to achieve high-quality data; (2) present experimental results evaluating conventional methods of data collection for sensitive surveys (e.g. randomized response techniques and their variants) as well as innovative new survey designs (e.g. mixed-mode surveys, item sum techniques). This also includes advancements in the methods for statistical analysis of data generated by these techniques; (3) discuss future perspectives for tackling the problem of social desirability and present possible alternative approaches for collecting sensitive data. This may include, for example, record linkage approaches, surveys without questions (e.g. biomarkers), and non-reactive measurement.
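As background for the randomized response techniques mentioned in goal (2), here is a minimal simulation sketch of Warner's (1965) classic design and its prevalence estimator. All function names and numbers are illustrative and not taken from any paper in this session:

```python
import random

def simulate_warner(true_prevalence, p_sensitive, n, seed=42):
    """Simulate Warner's (1965) randomized response design.

    Each respondent privately operates a randomizing device: with
    probability `p_sensitive` they answer the sensitive statement
    ("I have trait X"), otherwise its negation. The interviewer sees
    only "yes"/"no", never which statement was answered, which
    protects the respondent and reduces the incentive to misreport.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        has_trait = rng.random() < true_prevalence
        asked_sensitive = rng.random() < p_sensitive
        # Truthful answer to whichever statement the device selected
        answer = has_trait if asked_sensitive else not has_trait
        yes += answer
    lam = yes / n  # observed proportion of "yes" answers
    # Unbiased (method-of-moments) estimator of the trait prevalence
    return (lam - (1 - p_sensitive)) / (2 * p_sensitive - 1)

# With a 20% true prevalence, the estimate recovers roughly 0.20
estimate = simulate_warner(true_prevalence=0.20, p_sensitive=0.7, n=100_000)
print(round(estimate, 2))
```

The design trades statistical efficiency for privacy: the closer `p_sensitive` is to 0.5, the stronger the privacy protection but the larger the variance of the estimator.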


Paper Details

1. Measurability of desirability: bias-control with social desirability scales

Ms Susanne Ehrlich (University of Passau)
Professor Horst-Alfred Heinrich (University of Passau)

SD scales are commonly used to control for social desirability in surveys. The project presented here questions the effectiveness of two such control scales. The SD scales of Stocké (2003) and of Winkler, Kroh, and Spiess (2006) are tested against each other because both are widely applied in German survey research. Both are shortened versions of the well-known scales developed by Crowne and Marlowe (1960) and by Paulhus (1991). Based on the results of an earlier study (Ehrlich 2010), we doubt both scales' validity.
In this paper, the two scales' theoretical background is first reconstructed. Marlowe and Crowne perceive social desirability as the result of individuals' need for social approval, whereas Paulhus defines it as the result of defense mechanisms. Furthermore, one scale represents a unidimensional construct, while the other distinguishes between two different types of social desirability.
Insofar as both scales claim to reflect the same construct, we tested them against each other. They were included in a survey on right-wing extremism. Data were collected in 2008 (N=987). External validation was achieved by comparison with a scale measuring attitudes towards foreigners. Results indicate that the two scales do not measure the same construct. Consequently, we discuss whether social desirability must be perceived as a mass phenomenon, since a remarkable proportion of respondents gave socially desirable answers, or whether, on the contrary, our operationalizations of the concept have to be questioned.


2. The Effects of Social Distance and Question Reading on Social Desirability Response Bias

Mrs Marieke Haan (University of Groningen)
Dr Yfke Ongena (University of Groningen)
Professor Kees De Glopper (University of Groningen)

Interviewer presence or absence can affect respondents' response behaviour. Respondents may want to portray themselves as norm-following citizens because they are afraid of what interviewers might think of uncommon responses. Furthermore, respondents may give answers that they think are useful for researchers, resulting in social desirability response bias (SDRB). Interviewers' verbal behaviour can also affect answering behaviour: interviewers should follow the rules of standardized interviewing, but often deviate from the scripted question wording.

This study focuses on differences in SDRB between f2f interviews, web surveys, and video-enhanced web surveys in the European Social Survey. The video mode contains pre-recorded clips of an interviewer reading the questions; response options are presented on the computer screen, and interviewer-respondent interaction is not possible. Four questions taken from the Marlowe-Crowne scale, selected for their relation to social norms, were used to gauge respondents' sensitivity to social desirability effects. To measure SDRB, high numbers of socially desirable answers and low numbers of socially undesirable answers are investigated. Additionally, the f2f interviews are transcribed to study the effects of question reading on answering behaviour.

We expect SDRB to be stronger in video-enhanced web surveys than in text-based web surveys, with the most SDRB found in f2f interviews. Furthermore, we expect that minor deviations in question reading lead to an adequate answer more often than when the question is read exactly as written. We also expect respondents to add explanations to their answers when interviewers try to formulate questions more politely.



3. Evaluating the psychometric properties of the Marlowe-Crowne Social Desirability Scale in internet format using Item Response Theory (IRT) and intensive individual interviews

Ms Vaka Vésteinsdóttir (University of Iceland)
Professor Ulf-Dietrich Reips ((1) University of Deusto, Spain; (2) IKERBASQUE, Basque Foundation for Science, Spain)
Professor Adam Joinson (UWE Bristol)
Dr Fanney Þórsdóttir (University of Iceland)

The most widely used measure of social desirability is the Marlowe-Crowne Social Desirability Scale (MC-SDS), developed in paper-and-pencil format by Crowne and Marlowe and published in 1960. In spite of the common use of the MC-SDS to validate other measures, the scale itself has not been sufficiently validated (Leite & Beretvas, 2005): this holds for the original paper-and-pencil format, and no published studies have evaluated the psychometric properties of the MC-SDS in internet format at all. Confirmatory factor analyses have revealed some very low factor loadings in the paper-and-pencil format (Ventimiglia & MacDonald, 2012), and very low factor loadings are also apparent in the internet format. This calls for a re-examination of the MC-SDS items, which have remained unchanged since the scale's publication. The aim of the current research is to take a closer look at the scale's content by applying item response theory to examine how the latent characteristic assessed by the MC-SDS appears in item responses. Intensive individual interviews were also used to gain insight into what respondents are thinking while answering each item. The sample (n=743) used to analyze item responses was recruited through communication sites and e-mail by snowball sampling. The questionnaire contained two personality tests, background questions, and an Icelandic version of the MC-SDS (original data from Ragnarsson, Ólason & Halldórsson, 2011). Individual interviews were conducted with 20 participants per MC-SDS question.
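For readers unfamiliar with IRT, a minimal sketch of the two-parameter logistic (2PL) model on which such an item analysis typically rests. The parameter values below are purely illustrative and not estimates from this study:

```python
import math

def p_agree(theta, a, b):
    """2PL IRT model: probability that a respondent with latent trait
    level `theta` (here: social desirability) endorses an item with
    discrimination `a` and difficulty (location) `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two hypothetical MC-SDS-style items: one discriminates well between
# low and high trait levels, the other barely at all (the kind of item
# a low factor loading or flat item characteristic curve would flag).
strong = dict(a=2.0, b=0.0)
weak = dict(a=0.3, b=0.0)

for theta in (-2.0, 0.0, 2.0):
    print(theta,
          round(p_agree(theta, **strong), 2),
          round(p_agree(theta, **weak), 2))
```

Plotting `p_agree` against `theta` for each fitted item gives the item characteristic curves used to judge whether an item actually separates respondents along the latent trait.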