



Tuesday 18th July, 09:00 - 10:30 Room: Q4 ANF1


Researching Sensitive Topics: Improving Theory and Survey Design 1

Chair: Dr Ivar Krumpal (University of Leipzig)
Coordinator 1: Professor Ben Jann (University of Bern)
Coordinator 2: Professor Mark Trappmann (IAB Nürnberg)

Session Details

Social desirability bias is a problem in surveys that collect data on private issues, deviant behavior or socially undesirable opinions (e.g. sex, health, income, illicit drug use, tax evasion or xenophobia) whenever respondents’ true scores differ from social norms. Asking sensitive questions poses a dilemma for survey participants. On the one hand, politeness norms may oblige the respondent to be helpful and cooperative and to report the sensitive personal information truthfully. On the other hand, the respondent may not trust that his or her data will be protected and may fear negative consequences of self-reporting norm-violating behavior or opinions. Cumulative empirical evidence shows that, when surveyed on sensitive issues, respondents often engage in self-protective behavior: they either give socially desirable answers or refuse to answer at all. Such systematic misreporting or nonresponse leads to biased estimates and poor data quality for the entire survey study. Specific data collection approaches have been proposed to increase respondents’ cooperation and improve the validity of self-reports in sensitive surveys.

This session is about deepening our knowledge of the data generation process and advancing the theoretical basis of the ongoing debate about establishing best practices and designs for surveying sensitive topics. We invite submissions that deal with these problems and/or present potential solutions. In particular, we are interested in studies that (1) reason about the psychological processes and social interactions between the actors that are involved in the collection of the sensitive data; (2) present current empirical research focusing on ‘question-and-answer’ based (e.g. randomized response techniques, factorial surveys), non-reactive (e.g. record linkage approaches, field experiments or administrative data usage) or mixed methods of data collection (e.g. big data analyses in combination with classical survey approaches) focusing on the problem of social desirability; (3) deal with statistical procedures to analyze data generated with special data collection methods; (4) explore the possibilities and limits of integrating new and innovative data collection approaches for sensitive issues in well-established, large-scale population surveys taking into account problems of research ethics and data protection.
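To make one of the data collection approaches named above concrete, the sketch below shows the estimator behind a forced-response randomized response design, in which a randomizing device sometimes dictates the answer so that no individual "yes" is incriminating. This is a generic illustration in Python; the probabilities, function name and example figures are hypothetical and are not taken from any paper in this session.

def rrt_prevalence(yes_share, p_forced_yes, p_forced_no):
    """Estimate the prevalence of a sensitive attribute under a forced-response
    randomized response design: a respondent is forced to answer 'yes' with
    probability p_forced_yes, forced to answer 'no' with probability p_forced_no,
    and otherwise answers truthfully, so that
    E[yes_share] = p_forced_yes + (1 - p_forced_yes - p_forced_no) * prevalence."""
    p_truthful = 1.0 - p_forced_yes - p_forced_no
    return (yes_share - p_forced_yes) / p_truthful

# Hypothetical die-based design: 1-2 force 'yes', 3 forces 'no', 4-6 mean 'answer truthfully'.
# With 40% observed 'yes' answers, the estimated true prevalence is about 13%.
print(rrt_prevalence(yes_share=0.40, p_forced_yes=2/6, p_forced_no=1/6))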

Paper Details

1. The effects of social desirability and social undesirability on response latencies in surveys
Mr Henrik Andersen (Technische Universität Kaiserslautern)
Dr Jochen Mayerl (Technische Universität Kaiserslautern)

The validity of responses to sensitive questions has been a topic in survey research for several decades. In the context of sensitive questions, social desirability effects are the most frequently examined type of response effect. Social desirability refers to the tendency of respondents to overstate positive behaviours or characteristics and to understate negative ones (cf. Holtgraves 2004).
Various attempts have been made to assess the extent to which socially desirable responding biases survey results and to develop ways of avoiding such bias. Besides classical survey methods such as anonymous interview settings (sealed envelopes) and the inclusion of need-for-social-approval and trait-desirability scales, several techniques have been designed specifically to encourage respondents to answer truthfully, for example the randomized response technique, the item count technique, faking instructions or the bogus pipeline. However, doubts have been cast on the effectiveness of these techniques in eliciting more valid responses (cf. Wolter and Preisendörfer 2013, among others). Researchers have therefore turned to other techniques to identify socially desirable responses and to gain a better understanding of why and how people answer in a socially desirable way. One such technique involves analyzing paradata collected about the survey process, often in the form of response latencies. Drawing on theories of cognitive information processing, response latencies are used as proxies to infer information processing modes.
So far, the evidence is mixed as to whether socially desirable responding is indicated by shorter or longer response latencies. This paper aims to contribute to a better understanding of response latencies and their application in identifying bias in surveys.
We concentrate on both respondent-level and item-level characteristics (need for social approval and trait desirability, respectively) and use them to predict response latencies in a multilevel regression analysis. On a theoretical level, we integrate several competing but ultimately compatible explanatory models of response behavior into a more general framework. The analysis is based on data collected in CASI surveys (n=550) in which respondents took part in groups in a controlled, supervised survey situation.
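As an illustration of the kind of multilevel regression described above (not the authors' actual code), the following sketch regresses log response latencies on a respondent-level trait and an item-level trait, with random intercepts for respondents; the data file, column names and variable names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent-item pair; file and column names are assumed for illustration.
df = pd.read_csv("latencies_long.csv")
df["log_latency"] = np.log(df["latency_ms"])  # latencies are commonly log-transformed

# Respondent-level predictor (need for social approval) and item-level predictor
# (trait desirability), with random intercepts for respondents as level-2 units.
model = smf.mixedlm(
    "log_latency ~ need_for_social_approval * trait_desirability",
    data=df,
    groups=df["respondent_id"],
)
print(model.fit().summary())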
Our findings indicate that it is important to differentiate between socially desirable and socially undesirable attitudes and behaviour, as they seem to elicit quite different responses. Clearly desirable attitudes and behaviour elicit fast response times, while clearly undesirable ones lead to generally slower responses. We link these findings to the theoretical and empirical debates about the use of response latencies as proxies for information processing modes and contribute to their more effective use in identifying response bias due to social desirability.
References:
Holtgraves, T. (2004): Social Desirability and Self-Reports: Testing Models of Socially Desirable Responding. In: Personality and Social Psychology Bulletin, Vol. 30, No. 2, 161-172.
Wolter, F.; Preisendörfer, P. (2013): Asking Sensitive Questions: An Evaluation of the Randomized Response Technique Versus Direct Questioning Using Individual Validation Data. In: Sociological Methods and Research, Vol. 00, No. 0, 1-33.


2. Measurement and Mismeasurement of Abortion and Other Pregnancy Outcomes
Ms Rachel Scott (London School of Hygiene and Tropical Medicine)
Dr Laura Lindberg (Guttmacher Institute, New York)

Despite its frequency in the U.S., abortion remains a highly sensitive, stigmatized and thus difficult-to-measure behaviour. Furthermore, underreporting is not random; some groups are less likely to report their abortions than others. Less is known about the reporting of other pregnancy outcomes. Underreporting means that we have an incomplete, and possibly biased, picture not only of abortions but also of pregnancies in the US. Research is needed to understand who is underreporting and why, and to assess the potential biases in pregnancy data in nationally representative surveys.

The National Survey of Family Growth (NSFG) uses audio computer-assisted self-interviews (ACASI) to measure abortion, in addition to face-to-face (FTF) interviews, in order to elicit more complete reporting. We analyse data from the 2002, 2006-2010, and 2011-13 NSFGs to examine the effectiveness of the ACASI in improving reporting of abortion, and consider other factors which may influence the sensitivity of abortion reporting. We capitalize on reporting differences by pregnancy outcome (abortion, live birth, miscarriage), reporting mode (FTF v. ACASI), retrospective reporting period (lifetime v. last 5 years), and time period (2002, 2006-2008, 2008-2010, 2011-2013).

Reporting of abortions was higher using the ACASI, suggesting that privacy and stigma are important factors in women's willingness to disclose abortions. The ACASI elicited relatively more abortions among non-white women and low-income women, suggesting that stigma may be felt differently by different groups. For all pregnancy outcomes, the ACASI elicited relatively more reporting where a five-year, as opposed to a lifetime, recall period was used. Survey factors might affect different pregnancy outcomes in different ways, depending on their sensitivity and their salience. Across all outcomes, but most notably for miscarriages and abortions, reporting ratios increased between 2006-8 and 2011-13. This may reflect changes in the sensitivity of reporting miscarriages and abortions, in the effectiveness of the ACASI, or in willingness to take part in surveys. The ACASI may work differently across time, for different measures, and in varying survey contexts. Miscarriage also appears to be a sensitive outcome; this finding should be explored further.
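The abstract compares reporting ratios across modes and periods without defining them; one common way such ratios are constructed in this literature is to divide a survey's weighted count of reported events by an external benchmark count for the same population and period. The sketch below illustrates that calculation under this assumption; the figures and function name are hypothetical and not taken from the paper.

def reporting_ratio(survey_weighted_count, external_benchmark_count):
    """Share of externally counted events (e.g. abortions from a provider census)
    that are captured by weighted survey reports for the same period."""
    return survey_weighted_count / external_benchmark_count

# Hypothetical example: a weighted survey estimate of 430,000 abortions against an
# external benchmark of 900,000 implies that roughly 48% of abortions are reported.
print(reporting_ratio(survey_weighted_count=430_000, external_benchmark_count=900_000))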