ESRA 2017 Programme

Tuesday 18th July      Wednesday 19th July      Thursday 20th July      Friday 21st July

Tuesday 18th July, 11:00 - 12:30 Room: Q4 ANF1

Researching Sensitive Topics: Improving Theory and Survey Design 2

Chair: Dr Ivar Krumpal (University of Leipzig)
Coordinator 1: Professor Ben Jann (University of Bern)
Coordinator 2: Professor Mark Trappmann (IAB Nürnberg)

Session Details

Social desirability bias is a problem in surveys collecting data on private issues, deviant behavior or antisocial opinions (e.g. sex, health, income, illicit drug use, tax evasion or xenophobia) whenever respondents’ true scores differ from social norms. Asking sensitive questions poses a dilemma for survey participants. On the one hand, politeness norms may oblige the respondent to be helpful and cooperative and to report the sensitive personal information truthfully. On the other hand, the respondent may not trust that his or her data will be protected and may fear negative consequences from self-reporting norm-violating behavior or opinions. Cumulative empirical evidence shows that, when surveyed on sensitive issues, respondents often engage in self-protective behavior: they either give socially desirable answers or refuse to answer at all. Such systematic misreporting or nonresponse leads to biased estimates and poor data quality for the entire survey study. Specific data collection approaches have been proposed to increase respondents’ cooperation and improve the validity of self-reports in sensitive surveys.

This session is about deepening our knowledge of the data generation process and advancing the theoretical basis of the ongoing debate about establishing best practices and designs for surveying sensitive topics. We invite submissions that deal with these problems and/or present potential solutions. In particular, we are interested in studies that:

(1) reason about the psychological processes and social interactions between the actors involved in the collection of the sensitive data;
(2) present current empirical research on ‘question-and-answer’ based methods (e.g. randomized response techniques, factorial surveys), non-reactive methods (e.g. record linkage approaches, field experiments or administrative data usage) or mixed methods of data collection (e.g. big data analyses in combination with classical survey approaches), focusing on the problem of social desirability;
(3) deal with statistical procedures to analyze data generated with special data collection methods;
(4) explore the possibilities and limits of integrating new and innovative data collection approaches for sensitive issues into well-established, large-scale population surveys, taking into account problems of research ethics and data protection.

Paper Details

1. Estimating anti-immigrant sentiment and social desirability bias: item-counts in a mixed-modes survey
Dr Sebastian Rinken (Institute for Advanced Social Studies (IESA), Spanish Research Council (CSIC))
Dr Sara Pasadas del Amo (Institute for Advanced Social Studies (IESA), Spanish Research Council (CSIC))
Mr Juan Antonio Domínguez (Institute for Advanced Social Studies (IESA), Spanish Research Council (CSIC))

This paper assesses two distinct yet compatible research techniques that are widely supposed to reduce social desirability bias, namely: (a) data collection via self-administered versus interviewer-administered survey modes, and (b) measurement via an indirect gauge (i.e., the item-count technique) versus an explicit questionnaire item. Substantively, the study aims to estimate the prevalence of anti-immigrant sentiment, a notoriously sensitive and bias-prone research topic. In contrast with the bulk of extant scholarship on attitudes toward immigration and immigrants, we consider virulent anti-immigrant sentiment (i.e., generalized antipathy against immigrants) to be qualitatively different from generic qualms about immigration’s impact and management, for example with regard to the labor market; however, we recognize that the prevalence of virulent animosity is prone to be underestimated by explicit questionnaire items. This paper therefore explores research procedures that promise to avoid the pitfalls both of sprawling imputations of gratuitous hostility, on the one hand, and of exceedingly narrow measurement that captures only outspoken hostility, on the other. Hence, we aim to minimize not only the potentially significant share of “false negatives” generated by the latter, but also the “false positives” incurred by expansive notions of prejudice.
Our dataset stems from a combined CAWI-CATI survey (N=1232) conducted in 2016, using a mixed-mode probability-based panel recruited and maintained by the Institute for Advanced Social Studies (IESA), a unit of the Spanish National Research Council (CSIC). The questionnaire included an indirect measure of virulent anti-immigrant sentiment, obtained via a list experiment (item-count technique), and an explicit gauge of the same focal construct. As predicted by extant scholarship, in the whole sample the former yields a significantly higher estimate of out-group rejection than the latter, by a margin of seven percentage points. Although the size of that divergence is interesting in its own right, our research question derives from the dataset’s mixed-mode design. On the assumption that social desirability bias is due largely, or indeed primarily, to the interviewer’s perceived role as “representative” of moral norms such as tolerance and inclusiveness, explicit measures can be expected to yield significantly higher estimates of animosity, net of other factors, in the self-administered subsample than in the interviewer-administered branch (H1). And on the assumption that the interviewer effect diminishes strongly, or even disappears, when unobtrusive gauges are employed, that mode differential should decrease, or even vanish, when the item-count technique is used (H2). Since respondents’ sociodemographic profiles vary by survey mode, those gaps are not readily apparent in the raw data; their computation will therefore constitute the paper’s core results.
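The item-count estimate referred to in this abstract is conventionally obtained as the difference in mean item counts between the treatment group (which sees the baseline items plus the sensitive item) and the control group (baseline items only). A minimal sketch of that estimator; the counts and the names `ict_prevalence`, `treatment` and `control` are invented for illustration and are not the paper’s data:

```python
# Difference-in-means estimator for a list experiment (item-count technique).
# The control group counts J baseline items; the treatment group counts the
# same J items plus the sensitive item, so the mean difference estimates the
# prevalence of the sensitive trait.

def ict_prevalence(treatment_counts, control_counts):
    """Prevalence estimate: mean(treatment counts) - mean(control counts)."""
    mean_t = sum(treatment_counts) / len(treatment_counts)
    mean_c = sum(control_counts) / len(control_counts)
    return mean_t - mean_c

# Hypothetical item counts from eight respondents per group.
treatment = [2, 3, 1, 4, 2, 3, 2, 3]
control = [2, 2, 1, 3, 2, 3, 2, 2]
print(ict_prevalence(treatment, control))  # → 0.375
```

With real survey data the point estimate is this same difference in means; one would additionally apply survey weights and compute a standard error before comparing it with the explicit-question estimate.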

Ms Rachel Scott (London School of Hygiene and Tropical Medicine)

Abortions are known to be under-reported in surveys. Previous research has identified a number of ways in which survey methodology may increase or decrease women’s willingness to disclose abortions. This paper estimates the extent of under-reporting in two nationally representative population surveys by comparing the survey rates with routine statistics, in order to explore the ways in which survey methodology might influence the reporting of abortion. Routine statistics on abortion in Britain are considered to be complete.

Two British National Surveys of Sexual Attitudes and Lifestyles, conducted in 2000 and 2010 (Natsal-2 and Natsal-3), are used. These two cross-sectional surveys were conducted ten years apart on the same population, but used different methodologies to collect data on abortion. They therefore enable a limited natural experiment on the effect of changing survey methodology on the reporting of abortions. In Natsal-2, data on abortion were collected using a direct question: women were asked if they had ever had an abortion, and if so how many, and the time of their last abortion. In Natsal-3, data on abortion were collected using a pregnancy history module: women were asked how many times they had ever been pregnant, and for each pregnancy in turn they were asked the outcome of the pregnancy and when it ended.

There was no evidence of under-reporting in Natsal-2, which collected data on abortion using a direct question: the confidence interval of the abortion rate estimated from the survey included the rate obtained from national statistics. There was evidence of under-reporting in Natsal-3, which collected data on abortion through a pregnancy history module: the confidence interval of the rate did not include the rate obtained from national statistics, and only 71% of abortions were reported. A direct question may be more effective in eliciting reports of abortion than a pregnancy history module.
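The under-reporting check described in this abstract amounts to asking whether the rate from routine statistics falls inside the survey estimate’s confidence interval. A minimal sketch under a simple Poisson approximation; all figures and the name `survey_rate_ci` are hypothetical and stand in for the survey-weighted estimates the paper would actually use:

```python
import math

def survey_rate_ci(events, person_years, z=1.96):
    """Event rate from survey data with an approximate 95% confidence
    interval (Poisson approximation: SE = sqrt(events) / person-years)."""
    rate = events / person_years
    se = math.sqrt(events) / person_years
    return rate, rate - z * se, rate + z * se

# Hypothetical: 150 abortions reported over 10,000 woman-years of exposure.
rate, low, high = survey_rate_ci(150, 10_000)

# Hypothetical routine-statistics rate; under-reporting is suspected when
# this rate lies above the survey interval.
national_rate = 0.017
print(low <= national_rate <= high)  # → True
```

The completeness figure reported in the abstract (71% of abortions reported) is then simply the ratio of the survey rate to the routine-statistics rate.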

3. A Comparison of Self-reported Sexual Identity Using Direct and Indirect Questioning
Miss Alessandra Gaia (Institute for Social and Economic Research)

Providing sound statistical information on the lesbian, gay and bisexual population is needed to inform policy makers about the disadvantage and discrimination suffered by sexual minorities. However, obtaining good quality data is methodologically challenging, as sexuality is one of the most sensitive topics in surveys.

This paper compares estimates of sexual identity from different protocols. First, it shows the estimated prevalence of the lesbian, gay and bisexual population obtained with an indirect questioning method, the “Item Count” Technique (ICT). Second, it compares a protocol involving face-to-face interviewing with a show card (adopted by the Integrated Household Survey, IHS) with a computer-assisted self-interview protocol (adopted, among others, by the UKHLS) and with the estimates produced using the ICT.

A slight variation of the Item Count Technique (ICT) is implemented in order to derive individual-level estimates. Thus, within individuals, the estimates obtained with the ICT and with direct questions are compared, to determine which socio-demographic groups are more likely to misreport on the direct question. The potential of this variation of the standard technique is also discussed, in terms of feasibility and ethical implications.

The analysis is based on experimental data collected in the UKHLS Innovation Panel, a nationally representative dataset of the UK population. Allocation to treatment (the ICT, UKHLS and IHS protocols) is randomized.
The results may inform survey practitioners and researchers on the best ways to elicit sexual orientation in the UK, and may inform data users on the quality of data elicited with the different protocols.