Tuesday 18th July, 14:00 - 15:30 Room: Q4 ANF1


Researching Sensitive Topics: Improving Theory and Survey Design 3

Chair: Dr Ivar Krumpal (University of Leipzig)
Coordinator 1: Professor Ben Jann (University of Bern)
Coordinator 2: Professor Mark Trappmann (IAB Nürnberg)

Session Details

Social desirability bias is a problem in surveys collecting data on private issues, deviant behavior or unsocial opinions (e.g. sex, health, income, illicit drug use, tax evasion or xenophobia) whenever the respondents’ true scores differ from social norms. Asking sensitive questions poses a dilemma to survey participants. On the one hand, politeness norms may oblige the respondent to be helpful and cooperative and to self-report the sensitive personal information truthfully. On the other hand, the respondent may not trust that his or her data will be protected and may fear negative consequences from self-reporting norm-violating behavior or opinions. Cumulative empirical evidence shows that, in the context of surveying sensitive issues, respondents often engage in self-protective behavior, i.e. they either give socially desirable answers or they refuse to answer at all. Such systematic misreporting or nonresponse leads to biased estimates and poor data quality for the entire survey. Specific data collection approaches have been proposed to increase respondents’ cooperation and to improve the validity of self-reports in sensitive surveys.

This session aims to deepen our knowledge of the data generation process and to advance the theoretical basis of the ongoing debate about establishing best practices and designs for surveying sensitive topics. We invite submissions that deal with these problems and/or present potential solutions. In particular, we are interested in studies that (1) reason about the psychological processes and social interactions between the actors involved in the collection of sensitive data; (2) present current empirical research on ‘question-and-answer’ based (e.g. randomized response techniques, factorial surveys), non-reactive (e.g. record linkage approaches, field experiments or administrative data usage) or mixed methods of data collection (e.g. big data analyses in combination with classical survey approaches), with a focus on the problem of social desirability; (3) deal with statistical procedures for analyzing data generated with such special data collection methods; (4) explore the possibilities and limits of integrating new and innovative data collection approaches for sensitive issues into well-established, large-scale population surveys, taking into account problems of research ethics and data protection.

Paper Details

1. Protection of Privacy in the Item Count Technique
Professor Tasos Christofides (University of Cyprus)
Miss Eleni Manoli (University of Cyprus)

It is widely accepted that when dealing with sensitive or stigmatizing issues, conventional survey techniques fail to produce reliable estimates because, as expected, people often refuse to participate. Even when they agree to participate, they may provide untruthful responses. Indirect questioning techniques have been devised so that reliable estimates can be produced while, at the same time, the privacy of the participants is protected. One such indirect questioning technique, the Item Count Technique, has been the focus of considerable research activity during the last few years. The original version of the technique, introduced by Raghavarao and Federer (1979) and Miller (1984), does not fully protect the privacy of the participants. As a result, some alternative versions have been devised to remedy the problem. In this presentation, we examine and compare these alternatives using various privacy protection criteria. We also discuss the perceived protection of privacy, i.e., the protection of privacy from the respondent’s point of view.
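
As an illustration of the disclosure problem in the original design (a minimal simulation sketch, not taken from the paper; all parameters are hypothetical): with G innocuous items on the list, a respondent in the treatment group who reports a count of G + 1 must hold the sensitive attribute, and a count of 0 rules it out.

import random

G = 4  # number of innocuous (filler) items on the list; hypothetical
random.seed(1)

def reported_count(p_sensitive=0.2, p_item=0.5):
    # One treatment-group answer in the original item count design:
    # the number of applicable fillers plus 1 if the sensitive item applies.
    fillers = sum(random.random() < p_item for _ in range(G))
    return fillers + (random.random() < p_sensitive)

answers = [reported_count() for _ in range(10000)]

# A count of G + 1 can only occur if the sensitive item applies, and a count
# of 0 only if it does not -- the "ceiling" and "floor" disclosures that
# motivate the alternative versions compared in the presentation.
revealing = sum(a in (0, G + 1) for a in answers)
print(f"{revealing / len(answers):.1%} of simulated answers reveal the respondent's status")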


2. A Recent Advancement of the List Experiment for Asking Sensitive Questions in Surveys: Empirical Evidence on the Performance of the Person-Count-Technique
Dr Felix Wolter (Johannes Gutenberg University Mainz, Department of Sociology)

The paper examines the effectiveness of the person-count-technique (PCT) for eliciting valid answers to sensitive questions in surveys. The PCT has recently been proposed by Grant et al. (2012, 2014) as a new variant of the list experiment (LI, also known as the item-count-technique or unmatched-count-technique). The strategy of both the LI and the PCT consists in anonymizing the interview situation by concealing the respondents’ answers, which in turn is expected to yield more honest answers to sensitive questions than conventional direct questioning (DQ). While the standard LI employs lists of items (filler items plus the sensitive one) for which respondents indicate the number of items that apply, thereby concealing their answer to the sensitive item, the PCT uses lists of persons. While the PCT is easier to implement and to handle, both for researchers and respondents, than the standard LI, the new design brings about some methodological challenges such as floor and ceiling effects.
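
For orientation, the prevalence estimate behind list designs of this kind is typically obtained as the difference between the mean reported counts of the group that received the sensitive item and the group that did not. A minimal Python sketch with purely hypothetical counts (not data from this study):

from math import sqrt
from statistics import mean, variance

# Hypothetical reported counts from the two randomly assigned groups
treatment = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3]  # fillers plus the sensitive item
control   = [3, 4, 2, 4, 3, 3, 2, 3, 4, 2]  # fillers only

# Difference-in-means estimate of the prevalence of the sensitive item
prevalence = mean(treatment) - mean(control)

# Standard error for two independent samples
se = sqrt(variance(treatment) / len(treatment) + variance(control) / len(control))

print(f"estimated prevalence: {prevalence:.2f} (SE {se:.2f})")

Floor and ceiling effects arise when reported counts pile up at zero or at the maximum, which undermines the protection the aggregated count is supposed to provide.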

These design aspects and some general pros and cons compared to the standard LI are discussed in the first part of the paper. The second part presents empirical evidence on the performance of the PCT as compared to standard DQ. Apart from the original study by Grant et al., this is the first empirical assessment of the performance of the PCT. The data for the analyses stem from a postal survey (N = 571) conducted in Mainz, Germany, using an experimental design to compare DQ and the PCT. In the survey, four sensitive questions about attitudes toward refugees were asked. Existing research shows that respondents tend to underreport anti-refugee attitudes in surveys, so the general hypothesis is that prevalence estimates of anti-refugee attitudes are higher under PCT than under DQ.

The main finding of the analyses is that the PCT is a viable method for asking sensitive questions in surveys. Estimates of anti-refugee attitudes are significantly higher for one of the four items and non-significantly higher for the three remaining items. All in all, the PCT is a variant of the list experiment that deserves further consideration, but it also needs further research on statistical issues, design aspects, and best practices for implementing the technique.


3. An Enhanced Item Sum Design for Measuring Quantitative Sensitive Behaviors
Dr Ivar Krumpal (University of Leipzig)
Professor Ben Jann (University of Bern)
Dr Martin Korndörfer (University of Leipzig)
Professor Stefan Schmukle (University of Leipzig)

Social desirability bias is a problem in surveys collecting data on private issues (e.g. sex, health, income) as soon as the respondent’s true status differs from social norms. Respondents often engage in self-protective behavior, i.e. they either give socially desirable answers or they refuse to answer at all. Such systematic misreporting or nonresponse leads to biased estimates and poor data quality for the entire survey. This study proposes an optimized item sum design for the measurement of quantitative sensitive characteristics. Compared to the approach recently proposed by Trappmann, Krumpal, Kirchner & Jann (2014), our method requires a smaller sample size to achieve a given level of statistical power. We describe our design theoretically and explore its practical viability in the context of a large-scale experimental online survey in the Netherlands, in which we asked sensitive questions about the extent of respondents’ pornography watching and their lifetime number of sexual partners. We conclude by discussing the limitations of our empirical study and outlining possibilities for follow-up research.


4. Multiple Sensitive Estimation and Optimal Sample Allocation in the Item Sum Technique
Miss Beatriz Cobo Rodríguez (University of Granada)
Mr Pier Francesco Perri (University of Calabria)
Mrs María del Mar Rueda García (University of Granada)

Studies addressing sensitive issues often yield unreliable estimates due to nonresponse and socially desirable responding. Refusal to answer and false answers constitute non-sampling errors that are difficult to deal with and can seriously flaw the quality of the data and, thus, jeopardize its usefulness for subsequent analyses. Although these errors cannot be totally avoided, they may be mitigated by increasing respondent cooperation, mostly by providing assurances of anonymity and confidentiality.
Recently, indirect questioning techniques have grown in popularity as effective methods for eliciting truthful responses to sensitive questions while guaranteeing privacy to respondents. In particular, the item sum technique (IST) is a new approach, derived from the item count technique, to elicit sensitive information on a quantitative variable and obtain estimates of the parameters of interest.
In its original form, the technique requires the selection of two independent samples. Units belonging to one of the two samples are presented with a long list (LL) of items containing a sensitive question and G innocuous questions; units of the other sample receive only a short list (SL) of items consisting of the same non-sensitive questions as in the long list. All the questions refer to quantitative variables measured on the same scale as the sensitive one. The respondents are asked to report the total score of the answers to all the questions in their list without revealing the individual score of each question. The mean difference of answers between the LL-sample and the SL-sample is used as an unbiased estimator of the population mean of the sensitive variable.
This work deals with two important questions concerning the IST and produces some methodological advances that may be useful for sensitive research. One concern is how to split the total sample size between the LL- and the SL-sample. A simple solution is to allocate the same number of units to each sample, irrespective of the variability of the items in the two lists. Nonetheless, this intuitive and basic solution turns out to be inefficient in the sense that the estimates may be affected by high variability. Alternatively, optimal sample size allocation may be pursued by minimizing the variance of the estimates under a budget constraint.
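
As a rough sketch of these two ingredients (the mean-difference estimator described above and a variance-minimizing split of the total sample), the following Python fragment uses purely hypothetical numbers and assumes, for simplicity, equal interview costs in both samples:

from math import sqrt
from statistics import mean, stdev

# Hypothetical reported totals: the LL-sample answered the G innocuous items
# plus the sensitive one, the SL-sample only the G innocuous items.
ll_totals = [14, 20, 11, 17, 23, 15, 19, 12]
sl_totals = [10, 13, 9, 12, 15, 11, 14, 8]

# IST point estimate: mean difference between the two samples
mu_hat = mean(ll_totals) - mean(sl_totals)
se = sqrt(stdev(ll_totals) ** 2 / len(ll_totals) + stdev(sl_totals) ** 2 / len(sl_totals))
print(f"estimated mean of the sensitive variable: {mu_hat:.2f} (SE {se:.2f})")

def optimal_split(n_total, sd_ll, sd_sl):
    # Minimize sd_ll**2 / n_ll + sd_sl**2 / n_sl subject to n_ll + n_sl = n_total
    # (equal-cost case); a budget constraint would additionally bring the
    # per-unit interview costs into the split.
    n_ll = round(n_total * sd_ll / (sd_ll + sd_sl))
    return n_ll, n_total - n_ll

# The long list is typically more variable, so it receives the larger share.
print(optimal_split(1000, sd_ll=stdev(ll_totals), sd_sl=stdev(sl_totals)))
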
In many real situations, several sensitive issues are to be surveyed and, in this case, the problem of multiple sensitive estimates has to be considered. We therefore discuss three different approaches to implementing the IST, showing their pros and cons and comparing the efficiency of the estimates stemming from them under optimal IST allocation.
Theoretical results for multiple estimation and optimal allocation are first derived under a generic sampling design and then particularized to simple random sampling and stratified sampling. Finally, a number of simulation experiments, based on the Survey of Household Income and Wealth by the Bank of Italy and on real data from a survey on cannabis use at the University of Granada, are carried out to investigate the performance of the optimal allocation for single and multiple sensitive estimations under different scenarios.