ESRA 2019 Programme at a Glance

Sensitive Questions in Surveys: Theory and Methods 1

Session Organisers Dr Ivar Krumpal (University of Leipzig)
Professor Ben Jann (University of Bern)
Professor Mark Trappmann (IAB Nürnberg)
Dr Felix Wolter (University of Mainz)
Time: Tuesday 16th July, 16:00 - 17:00
Room D01

Social desirability bias is a problem in surveys collecting data on private issues, deviant behavior or unsocial opinions (e.g. sex, health, income, illicit drug use, tax evasion or xenophobia) whenever respondents’ true scores differ from social norms. Asking sensitive questions poses a dilemma for survey participants. On the one hand, politeness norms may oblige the respondent to be helpful and cooperative and to self-report the sensitive personal information truthfully. On the other hand, the respondent may fear negative consequences from self-reporting norm-violating behavior or opinions within a survey setting. Cumulative empirical evidence shows that, when surveying sensitive issues, respondents often engage in self-protective behavior: they give socially desirable answers or they refuse to answer at all. Such systematic misreporting or nonresponse leads to biased estimates and poor data quality for the entire survey study. Specific data collection approaches have been proposed to increase respondents’ cooperation and improve the validity of self-reports in sensitive surveys. Furthermore, in recent years, web and mobile web technologies as well as big data approaches have opened up new (non-reactive) perspectives for gathering data on sensitive topics and tackling social desirability bias.

This session aims at deepening our knowledge of the data generation process and advancing the theoretical basis of the ongoing debate about establishing best practices and designs for surveying sensitive topics. We invite submissions that deal with these problems and/or present potential solutions. In particular, we are interested in studies that (1) reason about the psychological processes and social interactions between the actors involved in the collection of sensitive data; (2) present current empirical research on the problem of social desirability using ‘question-and-answer’ based methods (e.g. randomized response and item count techniques, factorial surveys and choice experiments), non-reactive methods (e.g. record linkage approaches, big data analyses, field experiments, or administrative data usage) or mixed methods of data collection (e.g. big data analyses in combination with classical survey approaches), highlighting best practices regarding recent methodological and technological developments; (3) deal with statistical procedures to analyze data generated with special data collection methods; (4) explore the possibilities and limits of integrating new and innovative data collection approaches for sensitive issues into well-established, large-scale population surveys, taking into account problems of research ethics and data protection.
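To make the logic of one of the ‘question-and-answer’ techniques mentioned under (2) concrete, the following sketch simulates Warner’s classic randomized response design. All values (true prevalence, sample size, design probability) are invented for illustration; the estimator simply inverts the known mixing probability of the design.

```python
import random

def warner_estimate(yes_share, p):
    """Recover prevalence pi from the observed 'yes' share under Warner's design.

    With probability p the respondent answers the direct question, with 1-p
    its negation, so: yes_share = p*pi + (1-p)*(1-pi)
    =>  pi = (yes_share - (1 - p)) / (2*p - 1), for p != 0.5.
    """
    if p == 0.5:
        raise ValueError("p = 0.5 makes the design uninformative")
    return (yes_share - (1 - p)) / (2 * p - 1)

# Simulated survey with assumed values: 20% true prevalence, design probability 0.7.
random.seed(1)
true_pi, p = 0.20, 0.7
answers = []
for _ in range(100_000):
    sensitive = random.random() < true_pi        # respondent's true status (private)
    direct = random.random() < p                 # randomizer: direct question or negation
    answers.append(sensitive if direct else not sensitive)

yes_share = sum(answers) / len(answers)          # all the interviewer observes
pi_hat = warner_estimate(yes_share, p)           # recovered prevalence, close to 0.20
```

No individual answer reveals the respondent’s status, yet the aggregate prevalence is identifiable, which is the core trade-off these techniques exploit.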

Keywords: Social desirability bias, data validity, response behavior, data collection techniques

How to Measure Age Discrimination in Recruitment - Linking Stereotypes, Skill Requirements and Employability

Dr Konrad Turek (Netherlands Interdisciplinary Demographic Institute; Jagiellonian University, Poland) - Presenting Author
Professor Kene Henkens (Netherlands Interdisciplinary Demographic Institute; University Medical Center Groningen; University of Amsterdam)

Studies on age discrimination and age stereotypes are increasingly popular in an ageing world. Particularly important and challenging is gathering reliable information from employers, who are the key actors in shaping the labour market situation of older people. It is widely argued that the low employability of older people is triggered by employers’ age stereotypes; however, the evidence for this thesis is surprisingly weak. The approaches used so far have their limitations. Employer opinion surveys show only a link between general attitudes and recruitment intentions regarding an abstract category of workers, with no clear implication for real-life behaviours. Quasi-experimental questionnaires (e.g. vignettes) present a hypothetical framework and suffer from low ecological validity. Recruitment procedures arranged as field experiments provide reliable proof of age selection in real-life contexts, but can mostly only hypothesise about why it occurred.
We discuss a more realistic approach to investigating the role of age stereotypes in employability. We shift the emphasis from stereotypes to skill requirements during recruitment, and focus on employers’ decision processes, during which candidates’ fit to the requirements is considered. Data come from five waves of a representative employer survey in Poland (N 2010–2014 = 80,017). Employers looking for workers described the positions offered and their actual requirements, including skill and age requirements. Using mixed logit models, we analyse the likelihood of recruiting people aged 50+, conditional on a range of skill requirements. This realistic (i.e., not abstract or hypothetical) framework enhances the ecological validity of the research. We also use large-scale data that provide a representative picture of labour demand across an entire national labour market. By avoiding direct opinions about the skills of younger and older workers, we limit response biases related to reporting socially correct opinions. The results thus provide more robust empirical evidence on the mechanisms that link age-based skill stereotypes and the employability of older candidates.
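The modelling idea of relating recruitment openness to skill requirements can be illustrated with a toy sketch. Plain logistic regression on synthetic data stands in for the mixed logit models estimated on the Polish survey; with a single binary skill requirement, the logit slope equals the log odds ratio of the resulting 2x2 table. All variable names, coefficients and sample sizes below are hypothetical, not taken from the study.

```python
import math
import random

random.seed(7)

# Hypothetical synthetic vacancies: x = 1 if the job requires an (assumed) age-
# stereotyped skill; y = 1 if the employer would consider candidates aged 50+.
b0_true, b1_true = 0.5, -1.2    # invented "true" logit coefficients
vacancies = []
for _ in range(20_000):
    x = 1 if random.random() < 0.5 else 0
    p = 1.0 / (1.0 + math.exp(-(b0_true + b1_true * x)))
    vacancies.append((x, 1 if random.random() < p else 0))

# With one binary predictor, the logit MLE is the saturated model:
# the intercept is the log odds for x = 0, the slope is the log odds ratio.
n = {(x, y): 0 for x in (0, 1) for y in (0, 1)}
for x, y in vacancies:
    n[(x, y)] += 1
b0_hat = math.log(n[(0, 1)] / n[(0, 0)])
b1_hat = math.log(n[(1, 1)] / n[(1, 0)]) - b0_hat
# b0_hat and b1_hat land close to the invented true values
```

A negative estimated slope would correspond to the stereotyped requirement lowering the odds of considering 50+ candidates; the actual study additionally handles many requirements at once and firm-level heterogeneity, which the mixed logit specification accounts for.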

Detecting Ethnic Discrimination with Vignette Experiments? A Field Validation of Landlords' Intentions to Invite Arab Prospective Tenants

Dr Knut Petzold (Ruhr-Universität Bochum, Sociology Section) - Presenting Author

Vignette experiments are widely used in the social sciences because they can deliver results with both high internal and high external validity. In particular, it is assumed that vignettes reduce social desirability bias: the indirect measurement is less closely tied to the respondent’s self-image, so the results are less influenced by self-presentation tendencies. Accordingly, vignettes are increasingly applied to investigate sensitive topics such as attitudes towards ethnic minorities. However, one could argue that, because they rely on self-reports, vignettes are still prone to socially desirable response behaviour.

The present validation study critically investigates to what extent vignettes provide valid measures of sensitive intentions to discriminate against ethnic minorities. For this purpose, in a first step, an unobtrusive field experiment was carried out in the German housing market. A total of 884 suppliers of rental apartments were each sent two appointment enquiries from interested applicants, one with an Arabic and one with a German name. The difference in invitations provides a valid measure of ethnic discrimination. In a second step, a follow-up survey was conducted among 75 of these landlords, to whom a total of 585 vignettes with hypothetical applicants were presented, showing the same characteristics as in the field experiment. Whether they would respond to the applicants’ enquiries serves as a measure of their intention to discriminate.

The design allows for a within-subject comparison of self-reported responses with observed behaviour, while controlling for time-constant unobserved heterogeneity. In particular, individual over- and underreporting can be explored. Although the distributions of intended and actual invitations differ between the two studies, the discrimination against Arab applicants can in principle be replicated in the vignette experiment. The findings indicate that vignettes may provide valid measures of discrimination intentions, though they still appear to be biased by social desirability. The implications of the study and general methodological considerations are discussed in conclusion.