



Tuesday 18th July, 09:00 - 10:30 Room: Q2 AUD2


Measuring and modeling response behavior and response quality in web surveys 1

Chair Ms Carina Cornesse (Mannheim University and GESIS)
Coordinator 1 Dr Jean Philippe Décieux (Université du Luxembourg)
Coordinator 2 Ms Jessica Herzing (Mannheim University)
Coordinator 3 Professor Jochen Mayerl (University of Kaiserslautern)
Coordinator 4 Ms Alexandra Mergener (Federal Institute for Vocational Education and Training)
Coordinator 5 Mr Philipp Sischka (Université du Luxembourg)

Session Details

Web surveys have become a popular method of data gathering for many reasons, including low costs and the ability to collect data rapidly. Due to the rapid spread of web surveys and to technological progress, the number of respondents filling out web surveys on the go using mobile devices is increasing. When answering survey questions on mobile devices, respondents may take short-cuts in the optimal cognitive response process, partly caused by external disturbing factors such as time pressure, inattention or the presence of other people. Such response behavior might introduce additional measurement error and thus affect response quality.

Yet, results on how the “interview situation” of web surveys influences response behavior, and thus response quality, are inconclusive. On the one hand, many studies have shown that respondents in web surveys answer questions on personal or sensitive topics more honestly than respondents in personal or telephone interviews. This can be explained by the subjective impression of anonymity that results from the absence of an interviewer. On the other hand, recent studies have shown that the lack of direct interaction with an interviewer can lead to careless responding and increased satisficing. Furthermore, web surveys are confronted with high unit and item nonresponse as well as increasing dropout rates. In addition, response behavior and response quality in web surveys may correlate with the selectivity of the samples under study and the recruitment methods of access panels.

These ambivalent perspectives on response behavior and response quality in web surveys should be addressed and discussed in this session. When modeling response quality and response behavior, researchers can draw on different measures and correlates, such as paradata (e.g. time stamps, device types), respondent profile data (e.g. education, socio-economic background) or survey profile data (e.g. type of survey question, interview situation).

We invite submissions from researchers who analyze response behavior and response quality in web surveys. We especially encourage submissions of papers that include experiments on response quality in web surveys based on empirical data, and papers that use complex statistical models to identify different respondent types. Furthermore, we are interested in submissions on solutions to response quality issues, e.g. how researchers can attract respondents' attention and motivate them to proceed through the survey questions and give valid answers, as well as which factors improve or impair answer quality.

Paper Details

1. Predicting Breakoffs in Web Surveys
Mrs Felicitas Mittereder (University of Michigan, ISR)
Mr Brady West (University of Michigan, ISR)

Due to recent shifts in survey data collection modes from mail to web, respondents who break off from a web survey prior to completing it have become a more prevalent problem in data collection. Given the already lower response rates of web surveys compared to more traditional modes, it is crucial to keep as many diverse respondents in the web survey as possible to prevent breakoff bias, maintain high data quality, and produce accurate survey estimates. As a first step toward preventing and reducing breakoffs, this study aims to predict breakoff timing at the question level. We analyze data from an annual online survey on sustainability conducted by the Institute for Social Research at the University of Michigan. The study makes use of survey data, along with rich paradata and accessible administrative information from the sampling frame. In addition to well-known factors associated with breakoffs, such as the answering device (e.g. mobile vs. PC), we investigate previous response behavior, such as speeding and item nonresponse, to predict breakoff probability for each respondent at the question level using logistic regression and survival analysis.
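As a hedged illustration of the kind of question-level breakoff model named above (not the authors' actual code or data), the sketch below fits a logistic regression on a synthetic long-format data set; the variable names (mobile_device, speeding, item_nonresponse, question_number) are hypothetical stand-ins for the paradata described in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per respondent-question pair (illustrative only).
rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "mobile_device": rng.integers(0, 2, n),    # 1 = mobile, 0 = PC (hypothetical)
    "speeding": rng.integers(0, 2, n),         # sped through earlier items
    "item_nonresponse": rng.integers(0, 2, n), # skipped earlier items
    "question_number": rng.integers(1, 60, n), # position in the questionnaire
})

# Simulated outcome purely for demonstration: breakoff at this question.
lin_pred = (-4.0 + 0.8 * df["mobile_device"] + 0.6 * df["speeding"]
            + 0.5 * df["item_nonresponse"] + 0.02 * df["question_number"])
df["breakoff"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

# Question-level logistic regression for the probability of breaking off.
model = smf.logit(
    "breakoff ~ mobile_device + speeding + item_nonresponse + question_number",
    data=df,
).fit(disp=False)
print(model.summary())
```

A survival model (e.g. a Cox regression with the question index as the time axis) could be fitted on the same long-format structure to model breakoff timing rather than a per-question probability.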


2. Do distractions during web survey completion affect data quality? Findings from a laboratory experiment
Mr Alexander Wenz (University of Essex)

Web surveys are increasingly considered a cost-effective mode of data collection for large-scale social surveys. Existing face-to-face surveys are introducing mixed-mode approaches that include web, and a number of probability-based online panels have recently been established in the United States and Europe. In contrast to interviewer-administered surveys, however, survey researchers lose control over the environment in which respondents complete the survey. Web respondents can decide when and where to fill in the questionnaire, might be exposed to various sources of distraction, or might choose to engage in other activities while filling in the questionnaire. In particular, respondents who use a mobile device might be in distracting environments where other people are present. Distractions and multi-tasking are potential threats to data quality, as respondents might not be able to fully concentrate on the survey task and might rely on cognitive shortcuts.
This paper reports results from a laboratory experiment conducted in November 2016 to examine how distractions during web survey completion influence data quality, and to identify whether the environment of survey completion is a potential source of measurement error.
Subjects (N = 276) are randomly assigned to experimental groups using a 3 (form of distraction) × 2 (device type) design and are asked to complete an online survey. The three forms of distraction are music, conversation between other people in the room, and no distraction; the two levels of device type are PC and tablet. The distractions were chosen to represent two sources of distraction that are likely to occur in web survey settings.
I will examine the effects of distraction and device type on various data quality measures, including item-nonresponse, straight-lining, extreme response styles, response consistency, survey duration, and responses to an Instructional Manipulation Check.
This paper adds to research on how the environment in which respondents fill in questionnaires affects response quality in web surveys.
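As a hedged illustration only (not the study's materials), the sketch below computes a few of the data quality measures named above from a hypothetical grid of Likert-scale responses; the respondent and item labels are invented for the example.

```python
import numpy as np
import pandas as pd

# Hypothetical answers of four respondents to a five-item grid (1-5 scale);
# NaN marks a skipped item.
responses = pd.DataFrame(
    {
        "q1": [3, 5, np.nan, 2],
        "q2": [3, 5, 4, 2],
        "q3": [3, 1, np.nan, 2],
        "q4": [3, 4, 4, 2],
        "q5": [3, 2, np.nan, 2],
    },
    index=["r1", "r2", "r3", "r4"],
)

# Item nonresponse: share of grid items left unanswered per respondent.
item_nonresponse = responses.isna().mean(axis=1)

# Straight-lining: identical answers to every item in a fully answered grid.
straight_lining = responses.nunique(axis=1).eq(1) & responses.notna().all(axis=1)

# Extreme response style: share of given answers at the scale end points (1 or 5).
extreme_style = responses.isin([1, 5]).sum(axis=1) / responses.notna().sum(axis=1)

print(pd.DataFrame({
    "item_nonresponse": item_nonresponse,
    "straight_lining": straight_lining,
    "extreme_style": extreme_style,
}))
```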


3. Nonresponses as context-sensitive response behaviour of participants in online surveys and their relevance for data quality
Mrs Daniela Wetzlehütter (University of Applied Sciences Upper Austria)

Starting point and focus: It is impossible to ignore the internet as a quick, practicable and economical source of information and a nearly unlimited communication channel, serving as a mass medium (online news), a mainstream medium (social media) and an individual medium (email). The number of web surveys, and of methods for conducting them, has increased with the use of the internet. For instance, the Arbeitskreis Deutscher Markt- und Sozialforschungsinstitute e.V. recorded a continuous increase in the share of quantitative web surveys among its members, from 1% in 1998 to 16% in 2004, 38% in 2010 and 43% in 2014. However, web-based surveys – as extensive discussions show – are not free of controversy. Concerns about data quality, typically regarding the representativeness of the data (coverage error / missing data) and difficulties in achieving unbiased responses (measurement error) caused by the equipment used (mode effects), are more and more common. Errors caused by the continuously rising proportions of drop-outs and item nonresponses in online surveys are relevant to almost the same degree. However, these sources of error are repeatedly neglected to a certain extent.
The starting point of the paper is the assumption that drop-out rates and item-nonresponse rates in online surveys differ as context-sensitive response behaviour (depending on whether respondents are at home or not, and whether they use a smartphone or not). This means that systematic errors linked to the interview situation (in terms of location and device) are conceivable. Accordingly, the presentation aims to illustrate how, and to what extent, the context of the interview situation has to be considered in the cleansing and analysis of data collected online in order to avoid, as far as possible, biased results.
Methods and Data: To test this assumption, an online survey about the “participation of university students” is used. An experimental design was applied, on the one hand to provoke drop-outs and on the other hand to test the consequences of different motivation strategies (prospect of a prize, appeals, manipulation of the progress bar) that are easily inserted and therefore often used in online surveys. For this purpose, an unusually long questionnaire (23 online pages, 121 items) was developed, into which the different motivation strategies were built. Of the 17,491 students invited to take part in the survey, 14.2% reacted to the invitation, 1,916 (11%) answered at least one question, and just 7.3% (n = 1,282) reached the final page.
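The following sketch simply recomputes the reported response and completion rates from the stated counts; the variable names are illustrative and the 14.2% reaction rate is taken directly from the abstract.

```python
# Counts as reported in the abstract.
invited = 17_491
answered_first_question = 1_916
reached_final_page = 1_282

reacted = round(invited * 0.142)  # 14.2% reacted to the invitation (~2,484)

participation_rate = answered_first_question / invited                     # ~11.0%
completion_rate = reached_final_page / invited                             # ~7.3%
dropout_among_starters = 1 - reached_final_page / answered_first_question  # ~33.1%

print(f"reacted to invitation:   ~{reacted}")
print(f"answered >= 1 question:  {participation_rate:.1%}")
print(f"reached final page:      {completion_rate:.1%}")
print(f"drop-out among starters: {dropout_among_starters:.1%}")
```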
Results: Drop-out rates and item-nonresponse rates differ depending on the survey context specified above: not being at home and using a smartphone increase both. The motivation strategies work differently: they reduce the risk of nonresponse only among those who were at home and did not use a smartphone. However, data cleansing does not affect the sample composition with regard to study-related characteristics. Detailed analyses show that the influence of the defined survey context on substantive findings varies. Based on this, the presentation will emphasize the importance of recording and considering the context information of data collection for data cleansing, analysis and interpretation of results, and will discuss how this