ESRA 2019 Programme at a Glance
Sensitive Questions in Surveys: Theory and Methods 2
Session Organisers: Dr Ivar Krumpal (University of Leipzig)
Professor Ben Jann (University of Bern)
Professor Mark Trappmann (IAB Nürnberg)
Dr Felix Wolter (University of Mainz)
Time: Wednesday 17th July, 09:00 - 10:30
Social desirability bias is a problem in surveys collecting data on private issues, deviant behavior, or antisocial opinions (e.g. sex, health, income, illicit drug use, tax evasion, or xenophobia) whenever respondents’ true scores differ from social norms. Asking sensitive questions poses a dilemma for survey participants. On the one hand, politeness norms may oblige the respondent to be helpful and cooperative and to self-report the sensitive personal information truthfully. On the other hand, the respondent may fear negative consequences from self-reporting norm-violating behavior or opinions within a survey setting. Cumulative empirical evidence shows that, when surveyed on sensitive issues, respondents often engage in self-protective behavior, i.e. they give socially desirable answers or refuse to answer at all. Such systematic misreporting or nonresponse leads to biased estimates and poor data quality for the entire survey. Specific data collection approaches have been proposed to increase respondents’ cooperation and improve the validity of self-reports in sensitive surveys. Furthermore, in recent years, web and mobile web technologies as well as big data approaches have opened new (non-reactive) perspectives for gathering data on sensitive topics and tackling social desirability bias.
This session aims at deepening our knowledge of the data generation process and advancing the theoretical basis of the ongoing debate about establishing best practices and designs for surveying sensitive topics. We invite submissions that deal with these problems and/or present potential solutions. In particular, we are interested in studies that:
(1) reason about the psychological processes and social interactions between the actors involved in the collection of sensitive data;
(2) present current empirical research on ‘question-and-answer’ based methods (e.g. randomized response and item count techniques, factorial surveys, and choice experiments), non-reactive methods (e.g. record linkage approaches, big data analyses, field experiments, or administrative data usage), or mixed methods of data collection (e.g. big data analyses combined with classical survey approaches), addressing the problem of social desirability and highlighting best practices regarding recent methodological and technological developments;
(3) deal with statistical procedures for analyzing data generated with special data collection methods;
(4) explore the possibilities and limits of integrating new and innovative data collection approaches for sensitive issues into well-established, large-scale population surveys, taking into account problems of research ethics and data protection.
Keywords: Social desirability bias, data validity, response behavior, data collection techniques
Item Count Technique in a Real Study
Miss Beatriz Cobo (University of Granada) - Presenting Author
Mrs Mhairi A. Gibson (University of Bristol)
Mr Eshetu Gurmu (University of Addis Ababa)
Mrs María del Mar Rueda (University of Granada)
Mrs Isabel M. Scott (University of Bristol)
Female genital cutting (FGC) has major implications for women’s physical, sexual and psychological health, and eliminating the practice is a key target for global development policy-makers. Its persistence in the face of longstanding eradication efforts is also of considerable interest to anthropologists. To date, the main barrier to achieving this goal has been the inability to infer privately held views and intentions within communities where FGC is prevalent.
Because FGC is a sensitive topic, people can be expected to hide their true views and intentions when questioned directly. We use indirect response methods, specifically the item count technique, to obtain privately held views on FGC in a rural Ethiopian community where the practice is common but declining.
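Under the item count technique’s standard assumptions (random assignment, no design effects, truthful answers to the innocuous items), the prevalence of the sensitive trait is estimated as the difference in mean item counts between the long-list and short-list groups. A minimal sketch with simulated data; all distributions, group sizes, and the 0.3 true prevalence are invented for illustration and are not figures from this study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: reported item counts for two randomized groups.
# Short list: 4 innocuous items; long list: the same 4 plus the sensitive item.
short_counts = rng.binomial(4, 0.5, size=500)                                 # control group
long_counts = rng.binomial(4, 0.5, size=500) + rng.binomial(1, 0.3, size=500)  # treatment group

# Prevalence estimate: difference in mean counts between the two groups.
prevalence = long_counts.mean() - short_counts.mean()

# Standard error of the difference of two independent sample means.
se = np.sqrt(long_counts.var(ddof=1) / len(long_counts)
             + short_counts.var(ddof=1) / len(short_counts))

print(f"estimated prevalence: {prevalence:.3f} (SE {se:.3f})")
```

The privacy protection comes from respondents reporting only a total count, never which items apply; the cost is the inflated variance visible in the standard error, which is why list experiments need comparatively large samples.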
Our results show that both genders express low support for FGC when questioned directly, while indirect methods reveal substantially higher support, particularly with respect to the practice being desirable for daughters-in-law. Educated people are privately more supportive of the practice than they are prepared to admit publicly. Older, educated men are particularly inclined to conceal their ‘true’ preference for FGC. As this subgroup represents the most influential members of society, their preferences may constitute a primary barrier to eradicating FGC.
Our results demonstrate the inadequacy of traditional, yet widely used, direct questioning methods, and the great potential of indirect techniques to advance policy formation and evaluation for culturally sensitive topics such as FGC.
A Comprehensive Meta-Analysis of Experimental Survey Studies on the Performance of the Item Count Technique
Dr Felix Wolter (Johannes Gutenberg University Mainz, Department of Sociology)
Mr Justus Junkermann (Johannes Gutenberg University Mainz, Department of Sociology) - Presenting Author
Mr Ingmar Ehler (Institute for Sociology, Johannes Gutenberg University)
Recently, the person count technique (PCT) has been proposed as an advancement of the conventional item count technique (ICT) for asking sensitive questions in surveys. While the ICT uses lists of filler questions, the PCT makes use of person lists: respondents indicate the number of people to whom a (sensitive) statement applies, with the respondent either included in the list (long list) or not (short list). While PCT procedures are easier to implement in surveys than conventional ICT designs (especially if many sensitive questions are asked), they bring about some methodological challenges due to homophily effects.
The main part of the paper presents empirical evidence on the performance of various PCT procedures, with data stemming from a large-scale CATI survey in Germany (N=3000). First, we present advancements of the basic PCT design, in which respondents choose the list of uninvolved persons themselves (yielding homophily effects): the G-PCT (Group-PCT) introduces predetermined groups of uninvolved people (e.g., “a typical democrat voter”), while the F-PCT (Fixed-PCT) fixes the uninvolved persons by design (e.g., “Angela Merkel”, “Pope Francis”). Second, we evaluate the classic PCT, G-PCT, and F-PCT by experimentally comparing their prevalence estimates and nonresponse rates with those obtained from conventional direct questioning (DQ). The expectation here is that PCT procedures reduce misreporting and nonresponse caused by social desirability (or other reasons). The items investigated pertain to various sensitive topics such as voting, political attitudes, traffic violations, and petty crimes. For some of the sensitive items, an external validation is possible as aggregate true values are known. For the rest of the items, a ‘more (less) is better’ criterion is used for evaluation.
As fieldwork for the CATI survey continues until the end of 2018, results could not yet be reported at the time this abstract was written.
Methodological Advances and Use of Indirect Questioning Techniques in Sensitive Surveys
Professor Pier Francesco Perri (University of Calabria - Department of Economics, Statistics and Finance) - Presenting Author
Dr Beatriz Cobo Rodriguez (University of Granada - Department of Statistics and Operational Research)
Professor Maria del Mar Rueda Garcia (University of Granada - Department of Statistics and Operational Research)
Empirical research addressing sensitive issues often yields unreliable estimates due to nonresponse and socially desirable responding. Refusals to answer and false answers are nonsampling errors that are difficult to deal with; they can seriously flaw the quality of the data and thus jeopardize the usefulness of the collected information for subsequent analyses. Although these errors cannot be totally avoided, they may be mitigated by increasing respondents’ cooperation and assuring survey participants of anonymity and confidentiality. Recently, indirect questioning techniques (IQTs) have grown in popularity as effective methods for eliciting truthful responses to sensitive questions while guaranteeing privacy to respondents. Over time, many variants of the original IQTs have been proposed with the aim of enhancing the perceived level of privacy protection, improving the efficiency of the estimation process, and accommodating more complex survey situations.
The present contribution aims at bringing together methodological developments and practical aspects of different IQTs. Specifically, the discussion will cover some recent methodological advances, which include the use of calibration estimators and optimal sample size allocation for the item sum technique (IST), the implementation of the IST when two or more sensitive variables are investigated and multiple sensitive estimates are required, and the bivariate logistic regression extension of two randomized response (RR) mechanisms, namely the simple and crossed models.
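For the quantitative case, the basic item sum technique estimator works analogously to the item count estimator: the mean of the sensitive variable is the difference between the mean sums reported by the long-list and short-list groups. A minimal sketch with simulated data; the distributions, group sizes, and example item are invented for illustration, and the calibration and allocation refinements discussed above are not shown:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical IST data: each respondent reports only the SUM of quantitative
# items. The long-list group's sum additionally includes the sensitive
# quantitative item (e.g., days of cannabis use in the last month).
short_sums = rng.poisson(10, size=400)                           # innocuous items only
long_sums = rng.poisson(10, size=400) + rng.poisson(2, size=400)  # plus sensitive item

# The mean of the sensitive variable is the difference in group means.
mean_sensitive = long_sums.mean() - short_sums.mean()
print(f"estimated mean of sensitive item: {mean_sensitive:.2f}")
```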
The practical part will focus on the survey plan and the results of a mixed-mode sensitive survey on cannabis use and sexual addiction conducted among university students in Spain. Three data-collection methods are considered and compared: direct questioning (DQ), the IST, and RR. We will discuss how the IST and RR surveys can enhance respondents’ cooperation and, according to the “more-is-better” assumption, produce more reliable estimates than the traditional DQ survey.
Answering Sensitive Questions. Can Indirect Question Techniques Help?
Dr Martina Kroher (Leibniz Universität Hannover) - Presenting Author
In the social sciences, many researchers deal with sensitive topics and are consequently confronted with participants who do not respond honestly but instead give false or socially desirable answers. To nevertheless obtain valid information, indirect question techniques (for example, the Randomized Response Technique, the Crosswise Model, and the Item Sum Technique) were developed. Over the last decades, a series of studies dealt with the applicability and validity of these question techniques, yielding mixed evidence.
The present study compares the Randomized Response Technique (RRT) and the Crosswise Model (CM) with answers obtained by direct questioning (DQ). Using an online survey, we asked German college students about their academic cheating behavior. About 20,000 respondents were randomly assigned to one of the three question techniques, answering questions about misbehavior in exams and term papers.
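For reference, the prevalence estimators behind these designs are simple method-of-moments formulas. Under Warner’s original RRT, each respondent answers the sensitive statement with known probability p and its negation otherwise; the crosswise model leads to an algebraically identical estimator, with p being the known prevalence of the innocuous item. A hedged sketch, with illustrative figures not taken from this study:

```python
def warner_estimate(yes_share, p):
    """Warner's RRT prevalence estimator (requires p != 0.5).

    yes_share: observed share of 'yes' answers; p: probability that the
    randomizing device directs the respondent to the sensitive statement.
    For the crosswise model, yes_share is the share choosing the
    'both yes or both no' option and p is the known prevalence of the
    innocuous item; the formula is the same.
    """
    return (yes_share + p - 1) / (2 * p - 1)

# Illustrative figures: 40% 'yes' answers with randomization probability 0.7.
print(round(warner_estimate(0.40, 0.7), 3))  # -> 0.25
```

Note that with p near 0.5 the denominator approaches zero and the variance explodes; choosing p trades off statistical efficiency against the respondent’s perceived privacy protection.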
First results for the direct questions indicate that 47.8% of the respondents admit to some form of cheating at least once in their academic career. Comparing the prevalence rates of the RRT and CM with DQ, results show more honest answers for the two indirect question techniques, with both yielding similar rates.
Finally, it must be considered that there is evidence that higher prevalence rates can be caused by false positives (overreporting). This possibility will be discussed at the end of the presentation.