Wednesday 17th July 2013, 11:00 - 12:30, Room: Big hall

Innovations in measurement instrument construction for web-based surveys 1

Convenor Mr Simon Munzert (University of Konstanz, Germany)

Session Details

Large parts of the growing body of research on web-based surveys deal with coverage, sampling and nonresponse issues, and therefore with questions of representativeness. Less frequently discussed are measurement issues that arise from the unique way web-based surveys are conducted. Compared with other modes, web-based surveys provide an array of new tools and methods that allow for previously unknown flexibility in designing measurement instruments: respondents can be presented with (audio-)visual information in addition to (or even as a substitute for) verbal information, question and item order can easily be randomized, and valuable paradata such as response latencies, keystroke measures or server-side paradata can be collected on the fly (a minimal sketch of such latency capture follows the topic list below). These tools may help reduce respondents' burden when answering the questionnaire, but they also allow for developing completely new instruments for existing concepts (e.g., visual measures of various kinds of knowledge). Although measuring opinions, facts, etc. in an online setting might induce additional measurement bias compared with other modes, the web survey toolbox may provide instruments that can and should be used to combat these sources of error.
The goal of this panel is to bring together scholars who use new web survey tools to improve existing measures, or to construct new measures, of a variety of concepts. The focus is not so much on purely stylistic adaptations of the questionnaire layout, but on the development of new instruments with methods that go beyond ordinary question wording or response scale modifications. Papers presented in this session might deal with one of the following topics:
- innovative adaptation of existing instruments, or development of new ones, in a web-based survey setting using unique web survey tools
- usage of web survey paradata to reduce survey error, or as a substantive measure
- studies implementing a cross-validation or MTMM design
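
As a minimal illustration of the client-side paradata mentioned above, the following sketch records the response latency of a single question in the browser. All names are hypothetical and not tied to any particular survey platform; assume each question is rendered into the DOM.

    // Sketch: capture the time from question display to first answer (TypeScript).
    interface LatencyRecord {
      questionId: string;
      latencyMs: number; // time from display to first answer, in milliseconds
    }

    const latencyRecords: LatencyRecord[] = [];

    function trackQuestion(questionId: string, answerInput: HTMLInputElement): void {
      const shownAt = performance.now(); // question becomes visible now
      answerInput.addEventListener(
        "change",
        () => {
          latencyRecords.push({ questionId, latencyMs: performance.now() - shownAt });
        },
        { once: true } // record only the first answer event
      );
    }

Records like these would typically be sent back to the survey server together with the substantive answers.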


Paper Details

1. Measuring political knowledge in web-based surveys

Mr Simon Munzert (University of Konstanz)
Mr Peter Selb

"Political knowledge is to democratic politics what money is to economics: it is the currency of citizenship" (Delli Carpini and Keeter 1996). For that reason, a whole bulk of electoral research investigates the individual-level determinants (Leighley 1991; Luskin 1990), the institutional antecedents (Benz and Stutzer 2004; Gordon and Segura 1997) and the consequences of political knowledge for individual voting behavior and election outcomes (Alvarez 1998; Bartels 1996; Ferejohn and Kuklinski 1990; Luskin 2002; Macdonald, Rabinowitz, and Listhaug 1995; Sniderman, Brody, and Tetlock 1993). All such studies require measures of political knowledge, most of which are based on survey items. Yet, it is not self-evident how to collect information on individual political knowledge in web-based surveys. Well-established knowledge items about political facts do not easily lend themselves to being administered in online surveys, since respondents may be tempted to cheat, that is, to look up the correct answers via the web. We propose an alternative measurement instrument that almost exclusively relies on visual instead of verbal assignments, thus making it more difficult for the respondents to cheat. Additionally collected response latencies can be utilized to ex post identify cheating attempts. A battery of items has been tested on a sample of students, which also allows us to provide results from an item analysis and precise response time estimates.


2. Differentiated Measurement of Political Knowledge in Web Surveys: Evidence from Two Online Experiments

Ms Elena Wiegand (University of Mannheim)

Political knowledge is one of the major factors explaining political attitudes and behavior. In the recent literature, it is frequently equated with factual knowledge about political institutions. Although this kind of knowledge is normatively desirable, institutional proficiency is not necessarily important for someone's vote choice. To examine electoral behavior, more precise assessments of political knowledge about candidates or party positions can help make sense of the growing complexity and heterogeneity of vote decisions. Until now, however, such measurements have not been available. Two new web-based surveys of the German Longitudinal Election Study (GLES) now make it possible to evaluate innovative measurements of political knowledge.

This paper assesses the new instruments and answers the question of how well they discriminate between respondents. The innovative tools include questions about the appearance of politicians, their policy positions, and their current and past political roles, as well as an allocation experiment. Because cheating on knowledge questions is quite easy in web surveys, respondents were randomly split into two groups and asked to allocate politicians to their respective parties. One half of the respondents was presented with pictures of politicians, while the other half received classical matrix questions with the politicians' names. Comparing these groups, first results show that the new format discriminates between respondents very well and contributes substantially to a more differentiated explanation of electoral behavior. Finally, the advantages and drawbacks of these new measurements are discussed.
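
The random split described above can be pictured as a simple equal-probability assignment; the sketch below uses invented names and is not the GLES implementation:

    // Sketch: assign each respondent to the picture or the name condition.
    type Condition = "pictures" | "names";

    function assignCondition(): Condition {
      // Bernoulli(0.5) draw; production panels often balance assignment by blocking.
      return Math.random() < 0.5 ? "pictures" : "names";
    }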


3. Hearing Voices: Supporting online questionnaires with Text-to-Speech technology

Mr Joris Mulder (CentERdata, Tilburg University)
Dr Natalia Kieruj (CentERdata, Tilburg University)
Mr Arnaud Wijnant (CentERdata, Tilburg University)
Dr Salima Douhou (CentERdata, Tilburg University)

Representativeness of online panels is usually assessed with respect to the main socio-economic variables: gender, age, region, income and education. However, several hard-to-reach groups in society are often overlooked. For instance, illiterate people or people who have trouble reading text from computer screens may be underrepresented in panels because of these impairments.
We investigate whether primacy and recency effects are present and whether they vary between aural and visual presentation. We check whether respondents give more socially desirable answers under aural rather than visual presentation. We also examine the suitability of single questions versus grid questions in a text-to-speech setting. Finally, we check whether aural presentation of questionnaires with a female versus a male voice affects response behavior. By integrating text-to-speech techniques into the questionnaire, respondents were able to hear the survey questions as well as read them. The speech synthesis is done on the server side, so respondents do not have to install extra software to complete the survey.
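
This setup can be pictured roughly as follows; the /synthesize endpoint and its parameters are invented for illustration and are not CentERdata's actual interface:

    // Sketch: play server-synthesized audio for a question in the browser,
    // so no text-to-speech software is needed on the respondent's machine.
    function playQuestionAudio(questionText: string, voice: "male" | "female"): void {
      const url =
        "/synthesize?voice=" + voice + "&text=" + encodeURIComponent(questionText);
      void new Audio(url).play(); // standard HTMLAudioElement; streams the audio
    }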
A questionnaire was distributed in the LISS household panel of CentERdata. The experiment is a 2 (reading disabilities versus no reading disabilities) x 5 (traditional text versus computer voice male/female versus human voice male/female) mixed between-within subjects design. The overarching goal of this research is to determine whether text-to-speech techniques can be a valuable tool for online questionnaires and what (methodological) effects they have on response behavior.



4. Improving Cheater Detection in Web Based Randomized Response Using Client-Side Paradata

Ms Kristin Dombrowski (Martin-Luther-University Halle-Wittenberg)
Professor Claudia Becker (Martin-Luther-University Halle-Wittenberg)

Several techniques exist to reduce the problem of misreporting when answering sensitive questions. The best-known methods are the randomized response technique (Warner, 1965) and the item count technique (Miller, 1984). Both methods can help to reduce reporting bias if respondents understand and follow the techniques' instructions. Cheating detection models (CDM) address the problem that respondents may not act according to the instructions (and, hence, are cheating). Using the estimated proportion of cheaters among the respondents, the estimate of the prevalence of the sensitive characteristic can be improved (Clark & Desharnais, 1998). More detailed information about the (psychology of the) answering process could help in obtaining even better estimates of the proportion of respondents carrying the sensitive characteristic. Web surveys contribute to solving this problem via automatically collected client-side paradata. These paradata may be used to distinguish cheaters from non-cheaters more clearly and ultimately to improve survey quality.
In this presentation, we show an extended CDM for the randomized response technique using selected paradata. The analysis of the paradata is based on technique-specific assumptions about the psychological and sociological characteristics of cheating respondents.
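
For reference, a textbook statement of Warner's (1965) estimator in standard notation (not taken from the presentation itself): each respondent answers the sensitive statement with probability p and its negation with probability 1 - p, so the observed proportion of "yes" answers satisfies \lambda = p\pi + (1 - p)(1 - \pi), and hence

    \hat{\pi} = \frac{\hat{\lambda} - (1 - p)}{2p - 1}, \qquad p \neq \tfrac{1}{2},
    \qquad \operatorname{Var}(\hat{\pi}) = \frac{\hat{\lambda}(1 - \hat{\lambda})}{n\,(2p - 1)^2}.

The CDM of Clark and Desharnais (1998) adds the proportion of non-compliant ("cheating") respondents as a further parameter, which the extended model presented here informs with paradata.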