
ESRA 2021 Program at a glance



Sensitive Questions in Surveys: Theory and Methods

Session Organisers: Dr Felix Wolter (University of Konstanz, Germany)
Professor Jochen Mayerl (Chemnitz University of Technology)
Professor Mark Trappmann (IAB, Institute for Employment Research)
Time: Friday 2 July, 16:45 - 18:00

Misreporting to sensitive survey questions is an age-old problem in survey methodology. Empirical evidence has shown that survey respondents tend to engage in self-protective behavior when it comes to questions on private issues, deviant behavior, or antisocial attitudes (e.g. sex, health, income, illicit drug use, tax evasion, or xenophobia). This leads to biased estimates and poor data quality for the survey as a whole. Although a large body of methodological literature addresses these issues, many questions remain open.

This session aims to deepen our knowledge of the data-generation process and to advance the theoretical basis of the ongoing debate on best practices and designs for surveying sensitive topics, in order to tackle the problem of response bias.

Keywords: sensitive questions, misreporting, response bias, social desirability

A Meta-Analysis of Studies on the Performance of the Crosswise Model

Professor Rainer Schnell (University of Duisburg-Essen) - Presenting Author
Dr Kathrin Thomas (University of Aberdeen)

We present a meta-analysis of all published studies using the Crosswise Model (CM) to estimate the prevalence of sensitive characteristics in different samples and populations. On a data set of 141 items published in 33 articles or books, we compare the difference between estimates based on the CM and on a direct question (DQ). The overall effect size D is 4.88 [95% CI: 4.56, 5.21]. The results of a meta-regression indicate that the difference between DQ and CM is smaller when general populations are considered. The population effect suggests an education effect: differences between the CM and DQ estimates are more likely to occur when highly educated populations, such as students, are studied. Our findings raise concerns about the extent to which the CM is able to improve estimates of sensitive behaviour in general population samples.
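
For readers unfamiliar with the technique, the following is a minimal sketch of the standard crosswise estimator that such CM-versus-DQ comparisons rest on. The function name and the illustrative numbers are ours, not the authors'; the randomisation probability p of the unrelated question is assumed known by design.

    def crosswise_prevalence(lambda_hat, p):
        """Prevalence estimate under the Crosswise Model (CM).

        lambda_hat: observed share of respondents choosing the
                    'both yes or neither yes' answer option.
        p:          known probability of a 'yes' to the unrelated
                    question (must differ from 0.5).
        The model implies lambda = pi*p + (1 - pi)*(1 - p), which
        is solved for pi below.
        """
        return (lambda_hat + p - 1) / (2 * p - 1)

    # Hypothetical example: with p = 0.25 and 70% 'same' answers,
    # the estimated prevalence of the sensitive trait is 0.10.
    print(crosswise_prevalence(0.70, 0.25))

On our reading, the effect size D reported above is the difference between such a CM estimate and the corresponding direct-question estimate.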


Collecting data on misuse of short time work in a longitudinal survey using the crosswise method

Dr Mario Bossler (Institute for Employment Research (IAB))
Dr Christopher Osiander (IAB)
Mrs Julia Schmidtke (IAB)
Professor Mark Trappmann (IAB) - Presenting Author

Short-time work is a policy instrument that temporarily reduces employees' working time in order to prevent lay-offs when their employers face temporary drops in revenue during a crisis. Employees of companies that have announced short-time work and been approved to apply it are eligible for allowances that compensate 60 to 67 percent of the net income loss caused by short-time work. The Federal Employment Agency pays the short-time work allowances from the unemployment insurance funds. Short-time work is widely used in the current Covid-19 crisis and is credited with having saved many jobs in Germany during the 2009 financial crisis (Möller 2010). However, there is an often-expressed concern about substantial misuse of various types and about free-rider effects (Eichhorst & Marx 2009), as these are almost impossible to control.

To obtain first estimates of the extent of this misuse, we included questions in waves 6 (November/December 2020) and 7 (January/February 2021) of the IAB High-Frequency Online Personal Panel (IAB-HOPP) "Life and employment in times of Corona". The HOPP panel (Haas et al. 2021) is an online panel survey of initially 11,575 respondents from a random sample of people registered in the data of the Federal Employment Agency as either employed subject to social insurance contributions, unemployed, job seekers, or benefit recipients.
We asked about three types of misuse: (a) working more actual hours than declared, (b) receiving short-time work allowances although the amount of work had not changed, or (c) receiving short-time work allowances although a subsequent layoff was already foreseeable. We used either direct questioning or the crosswise method, with individuals randomly assigned to one of the two question types in two consecutive survey waves.

This allows us (under certain assumptions) not only to estimate the amount of misuse for all three indicators and both methods, but also to answer methodological questions about method effects, order effects, reliability, and the applicability of the model assumptions of the crosswise model. Data will be available in April 2021. We will present results on the estimated prevalence of misuse as well as on these methodological questions.
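
As a rough illustration of the kind of comparison this design permits, here is a sketch under simplified assumptions of our own; the response vectors and the randomisation probability p are hypothetical and are not taken from the IAB-HOPP questionnaire.

    import numpy as np

    def dq_estimate(yes_answers):
        """Direct questioning: prevalence is the share of 'yes' answers."""
        return np.mean(yes_answers)

    def cm_estimate(same_answers, p):
        """Crosswise method: invert lambda = pi*p + (1 - pi)*(1 - p)."""
        lam = np.mean(same_answers)
        return (lam + p - 1) / (2 * p - 1)

    # Hypothetical answers for one misuse indicator in the two random groups:
    dq_group = np.array([0, 0, 1, 0, 0, 0, 1, 0])        # 1 = admits misuse
    cm_group = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])  # 1 = 'both yes or neither yes'

    print("DQ prevalence:", dq_estimate(dq_group))
    print("CM prevalence:", cm_estimate(cm_group, p=0.25))
    print("Difference (CM - DQ):", cm_estimate(cm_group, p=0.25) - dq_estimate(dq_group))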


False Positives and the “More-is-Better” Assumption in Sensitive Question Research: New Evidence on the Crosswise Model and the Item Count Technique

Dr Felix Wolter (University of Konstanz) - Presenting Author
Professor Andreas Diekmann (ETH Zurich and University of Leipzig)

Several special questioning techniques have been developed to counteract misreporting to sensitive survey questions, e.g., on criminal behavior. However, doubts have been raised about their validity and practical value, as well as about the strategy of testing their validity via the “more-is-better” assumption in comparative survey experiments. This is because such techniques can be prone to generating false positive estimates, i.e., counting “innocent” respondents as “guilty” ones.

This article investigates the occurrence of false positive estimates by comparing direct questioning, the crosswise model (CM), and the item count technique (ICT). We analyze data from two online surveys (N = 2,607 and 3,203) carried out in Germany and Switzerland. Respondents answered three questions about traits whose prevalence in reality is known to be zero.

The results show that the CM suffers more from false positive estimates than the ICT. CM estimates reach up to 15 percent for a true value of zero, whereas the mean of the ICT estimates is not significantly different from zero. We further examine factors causing the biased CM estimates and show that speeding through the questionnaire (random answering) and problems with the measurement procedure, namely with the unrelated questions, are responsible.
Our findings suggest that the CM is problematic and should not be used or evaluated without the possibility of accounting for false positives. For the ICT, the issue is less severe.
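
A back-of-the-envelope simulation (our own illustration, not the authors' analysis) shows how random answering alone can produce false positives of this order: with the standard crosswise estimator, a share r of respondents who tick an answer option at random pushes the estimate for a zero-prevalence trait to roughly r/2.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_cm_estimate(n=100_000, p=0.25, random_share=0.3):
        """CM estimate for a trait whose true prevalence is zero, when a share
        of respondents (e.g. speeders) picks an answer option at random."""
        unrelated = rng.random(n) < p          # unrelated question with known p
        compliant_same = ~unrelated            # sensitive answer is 'no' for everyone,
                                               # so 'same' means the unrelated answer is 'no'
        random_same = rng.random(n) < 0.5      # random box-ticking
        is_random = rng.random(n) < random_share
        same = np.where(is_random, random_same, compliant_same)
        lam = same.mean()
        return (lam + p - 1) / (2 * p - 1)     # standard CM estimator

    print(simulate_cm_estimate())  # ~0.15 although the true prevalence is 0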


Using fictitious issues to investigate satisficing behaviour in surveys

Mr Henrik Andersen (Chemnitz University of Technology) - Presenting Author
Professor Jochen Mayerl (Chemnitz University of Technology)
Dr Felix Wolter (University of Konstanz)
Mr Justus Junkermann (Martin-Luther-Universität Halle-Wittenberg)


In surveys, one way to assess the quality of the data is to look at indicators of the effort that went into a response. Respondents who are currently motivated and have the opportunity for deliberate thought (i.e., optimizers) likely give qualitatively ‘better’ answers than those who are currently unmotivated or lack the opportunity to deliberate on their response (i.e., satisficers, or even mindless reporters).

There are a number of possibilities for assessing the motivation and opportunity, and thus the effort, put into the survey. For example, we can ask respondents directly, but there is a good chance they will not tell us the truth: in the context of social desirability, it is likely seen as undesirable to tell the interviewer that one is providing poor-quality responses. Alternatively, we could look at the speed at which the respondent completed the survey, assuming that respondents who took longer also put more effort into their answers.

However, if unmotivated respondents tend to be more easily distracted, then the speed with which they completed the survey will be an inaccurate indicator of the quality of their responses. Further, both of these options assume that effort is constant throughout the survey. Surely the content of specific questions will influence the effort a respondent is willing to expend on answering them.

Response latencies measured for each question get around the problematic assumption of constant motivation and opportunity, but it is difficult to be sure that, say, fast responses indicate low effort. Salient attitudes and opinions possess high predictive validity (they lead to attitude-conforming behaviour with greater probability than non-salient attitudes) and can be expressed quickly. Thus, quick responses could indicate either highly salient attitudes (high-quality data) or low-effort responses (low quality).

We propose using fictitious issues, i.e., questions about non-existent or highly obscure topics, to rule out the possibility of salient attitudes. Fast responses to a fictitious issue can therefore only really indicate low effort. We examine substantive responses to fictitious issues using multilevel logistic regression models.
We find that for most fictitious issues, substantive responses follow a U-shaped curve when plotted against response latencies: both very fast (< 5 sec.) and very slow (> 25 sec.) responses predict substantive answers, or ‘pseudo-opinions’ as they are sometimes called. We take this as evidence that satisficers do in fact give poor-quality data, reporting opinions and attitudes on topics they know little about. However, optimizers also tend to give pseudo-opinions, which is in line with the ‘imputed meaning’ hypothesis, i.e., they take cues from the question to generate an ad hoc judgment. This can be seen as a positive, because for optimizers substantive answers to obscure topics are valid measures of more generalized attitude objects, or as a negative if the interest lies in specific rather than generalized attitudes. Finally, we compare these findings with answers on existing topics and suggest that such comparisons could help survey researchers assess the quality of their data.
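
To make the modelling step concrete, the following is a minimal, single-level sketch on simulated data; the variable names, the simulated U-shape, and the use of plain logistic regression in place of the multilevel models reported above are our own simplifications.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per answer to a fictitious issue,
    # with the response latency in seconds and an indicator of whether a
    # substantive answer (rather than 'don't know') was given.
    rng = np.random.default_rng(1)
    n = 2000
    latency = rng.uniform(1, 40, n)
    # Build a U-shape into the simulated data purely for illustration:
    p_substantive = 1 / (1 + np.exp(-(0.004 * (latency - 15) ** 2 - 0.5)))
    df = pd.DataFrame({"latency": latency,
                       "substantive": rng.binomial(1, p_substantive)})

    # A quadratic latency term allows a U-shaped relationship; the study
    # itself uses multilevel logistic regression models.
    model = smf.logit("substantive ~ latency + I(latency ** 2)", data=df).fit()
    print(model.params)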