Program at a glance 2021



Decreasing measurement error for web-surveys

Session Organisers: Dr Verena Ortmanns (GESIS - Leibniz Institute for the Social Sciences)
Dr Ranjit Singh (GESIS - Leibniz Institute for the Social Sciences)
Ms Patricia Hadler (GESIS - Leibniz Institute for the Social Sciences)
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Time: Friday 9 July, 16:45 - 18:00

Technological change and the increasing level of digitalization are transforming survey research practice. Nevertheless, collecting high-quality data remains a central aim for survey research. And so the fight against errors of measurement and representation enters a new arena. This session aims to present current research on different approaches to decreasing measurement errors in web surveys. The talks in this session cover a broad range of issues, such as data quality in probability and non-probability online panels, the potential of paradata to detect measurement error, questionnaire design and linguistic aspects in web surveys, and mode effects on sensitive questions. The session is of interest to both researchers involved in large survey programs and those interested in fielding smaller web surveys, and to all researchers who want to assess the data quality of data collected via web surveys.

Keywords: web surveys, data quality, technological change

Using ‘Simple Language’ in Web Surveys

Ms Irina Bauer (GESIS Leibniz Institute for the Social Sciences) - Presenting Author
Dr Tanja Kunz (GESIS Leibniz Institute for the Social Sciences)
Dr Tobias Gummer (GESIS Leibniz Institute for the Social Sciences)

Comprehending survey questions is an essential step in the cognitive response process that respondents go through when answering questions. Respondents who have difficulty understanding survey questions may not answer at all, drop out of the survey, give random answers, or take shortcuts in the cognitive response process – all of which can decrease data quality. Comprehension problems are especially likely among respondents with low literacy skills. The 2018 LEO survey estimates the proportion of low literacy among the population in Germany at 12 percent. ‘Simple Language’ – clear, concise, and uncomplicated language for survey questions – may help mitigate comprehension problems and thus increase data quality. ‘Simple Language’ is a linguistically simplified version of standard language and is characterized by short, concise sentences with a simple syntax, avoiding foreign words, metaphors, and abstract concepts. To investigate the impact of ‘Simple Language’ on data quality, we conducted a 10-minute web survey among 4,000 respondents of an online access panel in Germany. Respondents were randomly assigned to a questionnaire that used either ‘Simple Language’ or ‘Standard Language’. We examine various indicators of data quality, including “don’t know” responses and nondifferentiation. In addition, we investigate various aspects of respondents’ survey assessment. Since data collection was only completed in December, results are not available yet. However, we expect the use of ‘Simple Language’ to have a positive effect on data quality and survey assessment. We expect this effect to be especially pronounced for subgroups that are more likely to have low literacy, such as people with a lower level of formal education or those whose native language is not German.


The effects of the number of items per screen in mixed-device web surveys

Mr Tobias Baier (Darmstadt University of Technology) - Presenting Author
Professor Marek Fuchs (Darmstadt University of Technology)

When applying multi-item rating scales in web surveys, a key design choice is the number of items presented on a single screen. Research suggests that it may be preferable to restrict the number of items per screen and instead increase the number of pages (Grady, Greenspan, & Liu, 2018; Roßmann, Gummer, & Silber, 2017; Toepoel et al., 2009). Grouping items to reduce the number of screens makes answering questions faster; however, this is only advantageous up to a certain number of items, beyond which the layout imposes an increased cognitive burden due to high visual load (Couper et al., 2013). In mixed-device web surveys, multi-item rating scales are typically presented in a matrix format on large screens such as PCs and in a vertical item-by-item format on small screens such as smartphones (Revilla, Toninelli, & Ochoa, 2017). The research question of this paper is whether decreasing the number of items per screen (at the expense of more survey pages) is beneficial for both the matrix format on a PC and the item-by-item format on a smartphone. For PC respondents, splitting up a matrix over several pages is expected to counteract respondents’ use of cognitive shortcuts (satisficing behavior) due to a lower visual load compared to one large matrix on a single screen. Smartphone respondents who receive the item-by-item format do not experience a high visual load even if all items are on a single screen (as only a few items are visible at the same time); however, they have to undergo more extensive scrolling, which is expected to come with a higher degree of fatigue (and therefore a higher risk of using cognitive shortcuts) compared to the presentation of fewer items on more screens. To investigate the effects of the number of pages for a given number of items, we will field a survey among members of the non-probability online panel of respondi in the spring of 2021.
Respondents will be randomly assigned to a device type to use for survey completion (aiming for about 750 respondents per device type) and to one of three experimental conditions that vary the presentation of several rating scales (with all items either on a single screen or spread over several screens). Satisficing behavior will be assessed with respect to speeding, drop-out rates, item nonresponse, straightlining, and non-differentiation.


Using paradata to measure and improve data quality in web surveys: an experimental assessment of differences between satisficing and optimizing behaviour

Mr Daniil Lebedev (National Research University Higher School of Economics) - Presenting Author

The widespread use of online methods of data collection makes it possible to collect and analyse paradata – information obtained during the data collection process, including records of the characteristics and behaviour of the interviewer and the respondent, as well as of the interview situation as a whole. Research on the use of paradata for evaluating and improving survey data quality lacks a framework connecting all available types of paradata with the erroneous situations that can arise during survey completion. In practice, researchers tend to analyse different paradata types separately, which narrows the possibilities for assessing and reducing measurement error. The research question is as follows: how can different types of paradata and their combinations be used to evaluate and reduce measurement error in web surveys?
In this paper we present the results of a web experiment with two experimental groups. Participants were asked either to fill out the online survey as quickly as possible, with low motivation to provide accurate data (the “satisficing” condition), or to fill out the survey as accurately as possible (the “optimizing” condition). For each participant, a wide range of paradata was collected during the survey completion process – including mouse movements, changes of browser focus, response latencies, and others – using the One Click Survey web software with a beta version of its advanced paradata collection tools (www.1ka.si/d/en). In total, 97 students participated, allowing us to compare data quality between the conditions and to explore how particular combinations of paradata types can be used to detect potentially erroneous situations that lead to increased measurement error.


Collecting Sensitive Information using ACASI in the Kingdom of Saudi Arabia

Dr Zeina Mneimneh (University of Michigan) - Presenting Author
Mrs Jennifer Kelley (University of Michigan)
Dr Yasmin Altwaijri (King Faisal Specialist Hospital and Research Center)

The use of audio computer-assisted self-interviewing (ACASI) to reduce reporting bias for sensitive information is well documented in Western countries (Couper, Singer, and Tourangeau, 2003; Epstein, Barker, and Kroutil, 2001; Lindberg and Scott, 2018). Other parts of the world are also increasingly adopting ACASI (Langhaug, Sherr, and Cowan, 2010; Mensch, Hewett, and Erulkar, 2003). Yet one region where ACASI has not been adopted is the Arabian Gulf. Understanding respondents' willingness to engage in an ACASI administration, and its effect on improving the reporting of sensitive information in the Arabian Gulf, is essential as cross-national studies that emphasize data comparability expand to include more countries worldwide. The issue of willingness to engage in an ACASI administration is important to explore given the novelty of the approach in this culture for both respondents and interviewers. If many respondents refuse to use ACASI, the mode's effectiveness in improving the reporting of sensitive information could be jeopardized.

This paper examines respondents’ willingness to engage in ACASI administration and its effect on reporting sensitive information in the Saudi National Mental Health Survey (SNMHS). The SNMHS is the first national mental health survey in the Kingdom of Saudi Arabia (KSA) and is part of the cross-national World Mental Health (WMH) Initiative. All 4,004 completed interviews were conducted face-to-face by trained interviewers who were gender-matched to respondents. Computer-assisted personal interviewing (CAPI) was the main mode for the majority of the sections, supplemented by two separate ACASI administrations (given the sensitive nature of many assessed topics). The first administration included questions on suicide and marital relationships and was offered to all respondents in an ACASI mode with the option to switch to a CAPI mode. The second administration included questions on attitudes toward substance and alcohol use and on conduct disorder behaviors, and was randomly assigned to an ACASI mode for half of the sample and to CAPI for the other half. Using the first administration, we will present findings on the rate of refusal to engage in ACASI (and to switch to a CAPI mode) and on which respondent- and interviewer-level characteristics are associated with this switch. Using the second administration, we will further explore the effect of ACASI (compared to CAPI) on reporting sensitive information related to alcohol and drug use and to engaging in conduct disorder behaviors. Differences in the rate of endorsing sensitive attitudes or behaviors and in missing data rates will be compared between the ACASI and CAPI modes.