



Tuesday 18th July, 11:00 - 12:30 Room: Q2 AUD2


Measuring and modeling response behavior and response quality in web surveys 2

Chair: Ms Carina Cornesse (Mannheim University and GESIS)
Coordinator 1: Dr Jean Philippe Décieux (Université du Luxembourg)
Coordinator 2: Ms Jessica Herzing (Mannheim University)
Coordinator 3: Professor Jochen Mayerl (University of Kaiserslautern)
Coordinator 4: Ms Alexandra Mergener (Federal Institute for Vocational Education and Training)
Coordinator 5: Mr Philipp Sischka (Université du Luxembourg)

Session Details

Web surveys have become a popular method of data collection for many reasons, including low costs and the ability to collect data rapidly. Due to the rapid diffusion of web surveys and ongoing technological progress, the number of respondents completing web surveys on the go using mobile devices is increasing. When answering survey questions on mobile devices, respondents may take short-cuts in the optimal cognitive response process, partly caused by external disturbances such as time pressure, inattention, or the presence of other people. Such response behavior might introduce additional measurement error and thus affect response quality.

Yet, results on how the “interview situation” of web surveys influences response behavior, and thus response quality, remain inconclusive. On the one hand, many studies have shown that respondents in web surveys answer questions on personal or sensitive topics more honestly than respondents in face-to-face or telephone interviews. This can be explained by the subjective impression of anonymity created by the absence of an interviewer. On the other hand, recent studies have shown that the lack of direct interaction with an interviewer can lead to careless responding and increased satisficing. Furthermore, web surveys are confronted with high unit and item nonresponse as well as increasing dropout rates. In addition, response behavior and response quality in web surveys may correlate with the selectivity of the samples under study and the recruitment methods of access panels.

This session addresses and discusses these ambivalent perspectives on response behavior and response quality in web surveys. When modeling response quality and response behavior, researchers can draw on different measures and correlates, such as paradata (e.g. time stamps, device types), respondent profile data (e.g. education, socio-economic background) or survey profile data (e.g. type of survey question, interview situation).

We invite submissions from researchers who analyze response behavior and response quality in web surveys. We especially encourage submissions of papers that include experiments on response quality in web surveys based on empirical data, and papers that use complex statistical models to identify different respondent types. Furthermore, we are interested in submissions on solutions to response quality issues, e.g. how researchers can capture respondents' attention and motivate them to work through survey questions and give valid answers, as well as which factors improve or impair answer quality.

Paper Details

1. Who fails and who passes instructed response item attention checks in web surveys?
Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)
Dr Joss Roßmann (GESIS - Leibniz Institute for the Social Sciences)
Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)

Providing high-quality answers requires respondents to devote their attention to completing the questionnaire and, thus, to assess each question thoroughly. This is particularly challenging in web surveys, which lack interviewers who can assess how carefully respondents answer the questions and motivate them to be more attentive if necessary. Inattentiveness can provoke response behavior that is commonly associated with measurement and nonresponse error: only superficially comprehending the question, retrieving semi-relevant or irrelevant information, not properly forming a judgement, or failing to map a judgement to the available response options. Consequently, attention checks such as Instructed Response Items (IRI) have been proposed to identify inattentive respondents. An IRI is included as one item in a grid and instructs respondents to mark a specific response category (e.g., “click strongly agree”). The instruction is not incorporated into the question text but is placed like an item label. The present study focuses on IRI attention checks because they (i) are easy to create and implement in a survey, (ii) require little space in a questionnaire (i.e., one item in a grid), (iii) provide a distinct measure of failing or passing the attention check, (iv) are not cognitively demanding, and (v), most importantly, provide a measure of how thoroughly respondents read the items of a grid.
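As an illustration, scoring such a check is straightforward: a respondent passes only if the instructed category was selected. The following minimal sketch shows this in Python; the data frame, column names, and instructed category are hypothetical and not taken from the authors' study.

import pandas as pd

# Hypothetical grid responses on a five-point scale; "iri" is the instructed
# response item, assumed to ask respondents to select category 5 ("strongly agree").
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "item_1": [4, 2, 5, 3],
    "item_2": [3, 2, 4, 3],
    "iri":    [5, 2, 5, 3],
})

INSTRUCTED_CATEGORY = 5  # assumed instruction: "click strongly agree"

# A respondent passes the check only if the instructed category was selected.
responses["passed_iri"] = responses["iri"] == INSTRUCTED_CATEGORY

failure_rate = 1 - responses["passed_iri"].mean()
print(f"Share failing the IRI: {failure_rate:.1%}")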
Most of the literature on attention checks has focused on the consistency of some “key” constructs, so that IRIs typically serve as a local measure of inattentiveness for the grid in which they are incorporated (e.g., Berinsky, Margolis and Sances 2014; Oppenheimer et al. 2009). This body of research focuses heavily on how the consistency of these key constructs can be improved by relying on such attention check measures, for instance, by deleting “inattentive” respondents. In the present study, we extend the research on attention checks by addressing the question of which respondents fail an IRI and thus show questionable response behavior.
To answer this research question, we draw on a web-based panel survey with seven waves that was conducted between June and October 2013 in Germany. In each wave of the panel, an IRI attention check was implemented in a grid question with a five-point scale. Across waves, the proportion of respondents failing the IRI varied between 6.1% and 15.7%. Based on these data, a logistic hybrid panel regression was used to investigate the effects of time-invariant (e.g., sex, age, education) and time-varying (e.g., interest in the survey topic, respondent motivation) factors on the likelihood of failing an IRI. Consequently, the results of our study provide additional insights into who shows questionable response behavior in web surveys. Moreover, our methodological approach allows for a finer-grained discussion of whether this response behavior results from rather static respondent characteristics or is subject to change.
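One way to approximate such a hybrid (within-between) specification, not necessarily the authors' exact model, is to split each time-varying predictor into a person mean and a wave-specific deviation and fit a logistic model with a respondent-level random intercept. The minimal sketch below uses simulated data and illustrative variable names.

import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

np.random.seed(0)

# Hypothetical long-format panel data: one row per respondent and wave.
# Column names are illustrative, not taken from the GESIS panel study.
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(200), 7),
    "wave": np.tile(np.arange(1, 8), 200),
    "failed_iri": np.random.binomial(1, 0.1, 1400),
    "age": np.repeat(np.random.randint(18, 75, 200), 7),   # time-invariant
    "motivation": np.random.uniform(0, 10, 1400),           # time-varying
})

# Within-between ("hybrid") decomposition of the time-varying predictor:
# person mean (between effect) and wave-specific deviation (within effect).
df["motivation_mean"] = df.groupby("respondent_id")["motivation"].transform("mean")
df["motivation_dev"] = df["motivation"] - df["motivation_mean"]

# Logistic model with a respondent-level random intercept (variance component).
model = BinomialBayesMixedGLM.from_formula(
    "failed_iri ~ age + motivation_mean + motivation_dev",
    vc_formulas={"respondent": "0 + C(respondent_id)"},
    data=df,
)
result = model.fit_vb()
print(result.summary())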


2. Comparing the Same Questionnaire Between Five Online Panels: A Study of the Effect of Recruitment Strategy on Survey Results
Professor Rainer Schnell (University of Duisburg-Essen)
Mr Leo Panreck (University of Duisburg-Essen)

Selecting respondents for a web survey can be done in many different ways. Choosing a probability approach considerably limits the number of these methods. However, most available web surveys are based on nonprobability samples. The most widespread method for selecting respondents in web surveys is the use of already existing online panels. We explore the effect of different recruitment strategies for online panels on the quality of survey data. We replicated the same questionnaire (which included about 25 non-demographic, factual questions, for which aggregated administrative data was available) with five different German online panel providers. This set of online panels consists of one commercial probability sample (n = 5,000), one academic nonprobability sample (n = 2,500) and three commercial nonprobability samples (each n = 5,000), each using a different method of recruitment. We report on differences in item nonresponse, response styles and other indicators of data quality. Finally, we compare the surveys with aggregated administrative data (beyond demographics) of the same population.
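One common way to carry out such a benchmark comparison is to compute, for each panel, the mean absolute relative bias of its estimates against the administrative figures. The sketch below is only a hypothetical illustration; panel labels, items, and values are invented and are not the study's results.

import pandas as pd

# Hypothetical estimates for two factual items across panels, together with
# aggregated administrative benchmarks; all figures are illustrative only.
estimates = pd.DataFrame({
    "panel": ["probability", "academic_nonprob", "commercial_A"],
    "owns_car": [0.71, 0.78, 0.80],                 # survey-based proportions
    "has_health_insurance": [0.89, 0.93, 0.95],
})
benchmarks = {"owns_car": 0.74, "has_health_insurance": 0.90}  # administrative data

# Mean absolute relative bias per panel: average of |estimate - benchmark| / benchmark.
items = list(benchmarks)
rel_bias = estimates[items].sub(pd.Series(benchmarks)).abs().div(pd.Series(benchmarks))
estimates["mean_abs_rel_bias"] = rel_bias.mean(axis=1)
print(estimates[["panel", "mean_abs_rel_bias"]])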


3. Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected by the Interview Situation?
Dr Stefan Niebrügge (INNOFACT AG)

RELEVANCE & RESEARCH QUESTION
Screens are everywhere. And so, of course, are interviews. Market research now happens in real life.
The author emphasizes the importance of the interview and its environment for several reasons: It's the core of good research practice. Its costs heavily affect the economic health of research businesses. Because we don't see the actual interview environment, researchers may be unaware of potential impacts on answering behaviour. Panel interviews compete with the many distractions that come with ubiquitous devices. We can assume that the interview environment is constantly changing. Last but not least, we need to include in our equation the emerging shift from the interview to the observation.

METHODS & DATA
A survey with a total N = 1,049 provides a comprehensive and representative picture of present-day interview environments. The respondents were free to choose time, place and device. Consistency of and commitment to the online interview were measured using a fit statistic from a MaxDiff exercise.
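The abstract does not specify the fit statistic; one simple stand-in is a count-based consistency score, i.e. the share of a respondent's best/worst choices that agree with their own item scores (times chosen best minus times chosen worst). The sketch below uses invented data and is meant only to illustrate that idea, not the author's actual measure.

import pandas as pd

# Hypothetical MaxDiff data: one row per task, recording which of the shown items
# a respondent picked as "best" and "worst". Values are illustrative only.
tasks = pd.DataFrame({
    "respondent_id": [1, 1, 1, 2, 2, 2],
    "best":  ["A", "A", "B", "C", "A", "C"],
    "worst": ["D", "C", "D", "B", "B", "D"],
})

def consistency(group: pd.DataFrame) -> float:
    # Count-based item scores: times chosen best minus times chosen worst.
    scores = group["best"].value_counts().sub(group["worst"].value_counts(), fill_value=0)
    # Share of tasks whose best/worst picks agree with the respondent's own scores.
    agree = [scores.get(b, 0) >= scores.get(w, 0) for b, w in zip(group["best"], group["worst"])]
    return sum(agree) / len(agree)

fit = tasks.groupby("respondent_id")[["best", "worst"]].apply(consistency)
print(fit)  # 1.0 would correspond to fully consistent answering behaviour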

RESULTS
A large share of panel interviews is done at home. Only 2 % of the interviews can be classified as truly mobile (out-of-home, using a mobile data connection). 88 % of the respondents show 100 % consistency in their answering behaviour.
The quality of answering behaviour is largely influenced by non-situational parameters such as the general personality trait of honesty and truthfulness, as measured with the HEXACO-60 personality inventory. It is not, or only to a negligible extent, affected by parameters of the actual interview situation. There are, however, a few remarkable exceptions, such as the consumption of alcohol prior to the interview.

ADDED VALUE
For research designs, it's key to keep in mind in which environment panel interviews take place. For research designs that expand the scope from lab situations to the real world, the very low share of truly mobile interviews is bad news; on the other hand, the results indicate that interview environments are more homogeneous than expected.