
ESRA 2021 full program





Methods for international surveys

Session Organiser: Dr Farsan Ghassim (University of Oxford)
Time: Friday 9 July, 15:00 - 16:30

This open session brings together papers presenting innovative methods for the different stages of multinational survey research, from invitations to participate, through questionnaire design and translation, to the analysis of survey results.

Keywords: methods, multinational, international, design, analysis

“Dear Colleague” – an experiment with survey invitations

Mr Christophe Heger (DZHW) - Presenting Author
Dr Jens Ambrasat (DZHW)

A key element for the success of quantitative surveys such as the DZHW Scientist Survey [Wissenschaftsbefragung] is the implementation of a design appropriate for the target audience and the specific context (Tailored Design Method, Dillman et al. 2014). With regard to the salutation in the invitation letter, the literature suggests that a personalized salutation is conducive to higher response rates and thus better data quality. However, considerable resources have to be expended to obtain, clean and maintain reliable datasets of addresses, names, academic titles and apparent gender. A cost-benefit analysis is therefore appropriate, especially if a standardized, impersonalized and ungendered greeting – “Dear Colleague” [Liebe Kollegin, lieber Kollege] – could indeed generate a similar response rate at a much lower resource cost by reducing the task of human address investigators to merely finding the email address. Such a simplified task also opens up the possibility of automation, as online tools are much more capable of reliably identifying email addresses than of correctly categorising first and last names and titles.

For the 2019 edition of the Scientist Survey, 60,000 scientists were contacted and invited to participate via email. The email addresses, as well as additional information such as first and last name, academic title and apparent gender, were collected by hand (at high cost) from the public websites of universities. These addresses were then cleaned and prepared for the invitations. To identify which salutation is most appropriate for the different academic status groups (professors, pre- and postdoctoral researchers), we implemented a survey experiment with the first third of the addresses as a means to inform our decision on the best salutation for the remaining 40,000 scientists. We tested four very common and plausible (not artificial) salutations that require varying amounts of the collected information (name, title, position, gender). Analytically, these forms reflect dichotomies along dimensions such as personalized/impersonalized and formal/less formal, as well as in-group signalling (“Dear Colleague”). In the presentation we describe our experimental approach and show how such a real-time experiment can benefit the course of the survey. Our experiment also shows how the preferences of scientists for different salutations are linked to academic status as well as gender.
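A minimal sketch of the kind of analysis such a salutation experiment implies: comparing response rates across four salutation conditions with a chi-squared test. The condition labels and counts below are illustrative placeholders, not the authors' salutations or data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: (respondents, non-respondents) per salutation form.
conditions = {
    "Dear Colleague":          (412, 4588),
    "Dear Dr Doe":             (455, 4545),
    "Dear Ms/Mr Doe":          (430, 4570),
    "Dear Professor Jane Doe": (460, 4540),
}

# Build the 4x2 contingency table and test for differences in response rates.
table = [list(counts) for counts in conditions.values()]
chi2, p, dof, expected = chi2_contingency(table)

for name, (resp, nonresp) in conditions.items():
    print(f"{name:24s} response rate: {resp / (resp + nonresp):.1%}")
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

In a real-time design like the one described, the winning salutation from this interim comparison would then be used for the remaining invitations.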


Self-Commitment on No-Opinion Responses: A New Survey Design Method

Mr Farsan Ghassim (University of Oxford) - Presenting Author

Survey researchers are divided on whether questionnaires should explicitly include no-opinion answer choices (e.g. “don’t know”). The “traditional” camp argues that failing to do so leads survey participants to report “non-attitudes”, i.e. supposed positions that cannot be characterized as attitudes in any meaningful way. The “revisionists” counter that the explicit inclusion of no-opinion options induces satisficing among respondents who would otherwise report a meaningful attitude. This paper suggests a compromise between these two approaches. In line with the revisionist argumentation, I suggest not offering a no-opinion option explicitly for every survey question. However, in the spirit of the traditionalists, I do recommend making the non-response option explicit at the beginning of the survey. Hence, I suggest asking respondents at the outset to self-commit to the desired behavior, i.e. only skipping questions that they cannot or do not want to answer even after careful consideration (e.g. questions that are deemed too sensitive). In February 2021, I conducted online survey experiments on nationwide samples in Africa, the Americas, Asia, Europe, the Middle East, and the Pacific region, in order to test my proposed method against common alternative approaches: first, offering a no-opinion option for every question; second, including a no-opinion filter before questions; and third, not offering a no-opinion choice at all. I hypothesize that my method is superior to these common alternatives in terms of decreasing both item nonresponse and measurement error, thereby increasing survey data efficiency.
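A minimal sketch of the experimental logic, assuming nothing beyond the four designs named above: respondents are randomly assigned to one condition, and item nonresponse is compared across conditions. The condition labels and outcome records are mine, purely for illustration.

```python
import random

# Labels paraphrasing the four designs compared in the paper.
CONDITIONS = [
    "self_commitment",    # proposed: upfront pledge, no explicit DK options
    "dk_every_question",  # explicit "don't know" offered for every question
    "dk_filter",          # no-opinion filter asked before the questions
    "no_dk",              # no "don't know" choice offered at all
]

def assign(respondent_id: int) -> str:
    """Reproducible per-respondent random assignment to one condition."""
    return random.Random(respondent_id).choice(CONDITIONS)

# Hypothetical outcome records: (condition, items_skipped, items_total).
records = [("self_commitment", 2, 40), ("dk_every_question", 6, 40),
           ("dk_filter", 5, 40), ("no_dk", 1, 40)]
for cond in CONDITIONS:
    skipped = sum(s for c, s, t in records if c == cond)
    total = sum(t for c, s, t in records if c == cond)
    print(f"{cond:18s} item nonresponse: {skipped / total:.1%}")

print(assign(12345))  # e.g. the condition respondent 12345 would see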


The compilation of the [MCSQ]: Multilingual Corpus of Survey Questionnaires

Dr Diana Zavala-Rojas (Universitat Pompeu Fabra, ESS ERIC)
Ms Danielly Sorato (Universitat Pompeu Fabra)
Dr Lidun Hareide (Moreforsking) - Presenting Author
Dr Knut Hofland (Formerly University of Bergen)


This presentation describes the design and compilation of the Multilingual Corpus of Survey Questionnaires (MCSQ), the first publicly available corpus of international survey questionnaires. The corpus was compiled from questionnaires from the European Social Survey (ESS), the European Values Study (EVS) and the Survey of Health, Ageing and Retirement in Europe (SHARE) in the (British) English source language and their translations into eight languages (Catalan, Czech, French, German, Norwegian, Portuguese, Spanish and Russian), as well as 29 language varieties (e.g. Swiss French). As a case study, we use the MCSQ to extract information and exemplify two types of problematic translations in survey questionnaires. The first type relates to choices of terms in the source document that have resulted in poor translations; specifically, we relate these choices to idioms and fixed expressions. The second type relates to cases where the semantic variation of translation choices exceeds the scope allowed to maintain the psychometric properties across languages, concretely in the intensity attached to verbal labels of response scales. With these examples, we aim to demonstrate how the corpus methodology can be used to analyse past translation outcomes and to improve questionnaire translation methodology.
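A minimal sketch of the kind of side-by-side inspection a parallel questionnaire corpus enables. The column names ("item_id", "language", "text"), the item code and the toy rows are assumptions for illustration, not the MCSQ's actual schema or content.

```python
import pandas as pd

# Toy rows standing in for aligned corpus entries.
corpus = pd.DataFrame(
    [
        ("B32", "ENG", "How much of the time ... happy?"),
        ("B32", "FRE", "Combien de temps ... heureux ?"),
        ("B32", "GER", "Wie oft ... gluecklich?"),
    ],
    columns=["item_id", "language", "text"],
)

# All translation variants of one source item, side by side, so that
# divergent term choices or label intensities can be compared directly.
for _, row in corpus[corpus["item_id"] == "B32"].sort_values("language").iterrows():
    print(f'{row["language"]:4s} {row["text"]}')
```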


When in Rome… The effect of providing examples in a survey question across countries

Dr Eva Aizpurua (ESS ERIC HQ - City, University of London) - Presenting Author
Dr Gianmaria Bottoni (ESS ERIC HQ - City, University of London)
Professor Rory Fitzgerald (ESS ERIC HQ - City, University of London)


To optimally answer a survey question, respondents must interpret the intent and meaning of the question, retrieve the information needed to answer it from their memories, integrate this information into a judgment, and map this judgment onto the responses provided (Tourangeau, Rips & Rasinski, 2000). Multiple strategies are used to facilitate this demanding process, from the design of the questionnaire (e.g., shortening reference periods to reduce recall biases) to its administration (e.g., randomising the order in which response options are shown to minimise response-order effects). One of these strategies is the use of examples in survey questions. By clarifying whether to include borderline instances, reminding respondents of examples that might go unnoticed, or offering hints regarding the types of cases that researchers are interested in, examples are intended to facilitate the comprehension and retrieval stages of the response process. In this study, we use data from CRONOS, a probability-based online panel implemented in Estonia (n = 730), Slovenia (n = 685), and the UK (n = 529) during Round 8 of the European Social Survey (2016). In a between-subjects experiment, respondents were randomly assigned to a condition in which a survey question assessing confidence in social media used Facebook and Twitter as examples (n = 971), or to a condition in which no examples were offered (n = 973). The results show that confidence in social media was significantly lower in the example condition, although the effect size was small. Differences in social media confidence were found between the countries, but the effect of providing examples was comparable across countries. The implications of these findings and how they fit in with previous work conducted in single-population studies are discussed.
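A minimal sketch of the between-subjects comparison described above: confidence scores under the "examples" and "no examples" conditions, compared with a t-test and a simple effect-size measure. The data are simulated; the response scale and any resulting effect are illustrative only, not the CRONOS results.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(8)
examples    = rng.integers(0, 11, size=971)  # assumed 0-10 confidence scale
no_examples = rng.integers(0, 11, size=973)

# Welch's t-test (no equal-variance assumption across conditions).
t, p = ttest_ind(examples, no_examples, equal_var=False)

# Cohen's d with a pooled standard deviation as the effect-size measure.
pooled_sd = np.sqrt((examples.var(ddof=1) + no_examples.var(ddof=1)) / 2)
d = (examples.mean() - no_examples.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```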


An overview of the size of measurement errors of a subset of questions of the European Social Survey

Mr Carlos Poses (RECSM - Universitat Pompeu Fabra) - Presenting Author
Dr Melanie Revilla (RECSM - Universitat Pompeu Fabra)
Mr Marc Asensio (RECSM - Universitat Pompeu Fabra)
Mrs Hannah Schwarz (RECSM - Universitat Pompeu Fabra)
Dr Wiebke Weber (RECSM - Universitat Pompeu Fabra)

The measurement quality of survey data is crucial, since it determines the accuracy of the information on which many studies and key decisions are based. In this paper, we estimated the measurement quality (defined as 1 minus measurement error, or as the product of reliability and validity) of 67 common social science questions that were part of Multitrait-Multimethod experiments in the first seven rounds of the European Social Survey. These questions were asked using response scales with different characteristics and in a total of up to 41 country(-language) groups. Our results show that measurement errors are omnipresent: the average measurement quality across all questions is .65. Overall, thus, an average of 35% of the variance in the observed survey answers can be attributed to measurement errors. Furthermore, the size of the errors varies across questions as well as across country(-language) groups. The average measurement quality of each question across all country(-language) groups ranges from .25 to .88, depending on the response scale and topic, and the average measurement quality in each country(-language) group across questions ranges from .52 to .76. Thus, the impact of measurement errors on applied research can differ depending on the exact question formulation used and on the country(-language) of interest. Consequently, in each study, researchers should assess the size of the measurement errors of their variables and how these affect their results. In addition, procedures to reduce and correct for measurement errors are encouraged.
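A minimal sketch of why such quality estimates matter downstream: a standard disattenuation correction divides an observed correlation by the quality coefficients (the square roots of the measurement quality) of both variables. The numbers below are illustrative, using only the paper's average quality of .65, not its question-level estimates.

```python
import math

def correct_correlation(observed_r: float, quality_x: float, quality_y: float) -> float:
    """Disattenuate a correlation given each variable's measurement quality
    (the proportion of observed variance not due to measurement error)."""
    return observed_r / (math.sqrt(quality_x) * math.sqrt(quality_y))

# With quality .65 for both variables, an observed correlation of .30
# corresponds to a corrected correlation of roughly .46.
print(round(correct_correlation(0.30, 0.65, 0.65), 3))
```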