
ESRA 2023 Preliminary Glance Program


All time references are in CEST

Using Internet Surveys in Cross-National Research

Session Organiser: Dr Tom W Smith (NORC at the University of Chicago)
Time: Friday 21 July, 09:00 - 10:30
Room: U6-01f

Pushed by the rising costs of traditional modes (face-to-face, telephone) and by the COVID pandemic, internet-based surveys are increasingly used in both national and cross-national research. Their use internationally is especially challenging given the varying levels of internet availability, the different levels of familiarity with completing surveys online, and the various sample frames that can be employed for drawing internet samples across countries. This session will examine the use of internet surveys cross-nationally and consider such issues as (1) how cross-national comparability can be maximized, (2) the use of one-time internet surveys vs. internet panels, (3) the comparability of internet surveys to surveys using other modes, and in particular the reliable measurement of trends over time when traditional modes are replaced by internet modes, and (4) the variation that arises from the devices used to access the internet and complete surveys (e.g., smartphones vs. laptops/desktops). Research will draw on the European Social Survey, the International Social Survey Programme, and other cross-national studies.

Keywords: Cross-national survey research, Mode, Internet

Unpacking Survey Response Bias Across Countries, and Tips On How To Design Better Cross-National Surveys: Learnings From Research Experiments in Internet Surveys Across Southeast Asia

Dr Antarika Sen (Senior Manager, Milieu Insight, https://www.mili.eu/sg) - Presenting Author

Existing research on cross-national surveys has shown that there are inherent cultural differences in the way people use rating scales. Ignoring these differences can lead to unreliable and erroneous conclusions.

This paper builds on the existing literature by testing different types of (i) response scales (e.g., agreement scales, likelihood scales), (ii) response scale lengths, (iii) scale design formats (e.g., a standard radio-button layout vs. a spinner design), and (iv) survey topics. The paper also focuses on findings from mobile-based surveys. To this end, we carried out a set of controlled experiments across N = 15,415 respondents in Singapore, Malaysia, Indonesia, the Philippines, Vietnam, and Thailand.

There were three key learnings:

1. We found that, on average across multiple response scale lengths, respondents in Vietnam and the Philippines were twice as likely to use the highest rating on a scale as respondents in Singapore. Vietnam and the Philippines were also the most likely to pick the top rating choice across different question topics.

2. The design format in which response scales are presented also had a significant impact on the results, with poor designs exaggerating differences between countries. For example, we ran one version of a survey using a standard single-select rating scale (a vertical Likert-type scale presented as radio buttons) and another version of the same survey using a spinner format. Our findings showed that the spinner design reduced top-choice bias by a significant margin for the countries with the strongest top-choice bias (Vietnam and the Philippines).

3. Cleaning out inattentive respondents (e.g., straightliners) reduced top-choice bias in some countries; left unaccounted for, this bias can render inter-market comparisons inaccurate. We also found that, compared to standard rating scales, a spinner rating scale led to significantly less straightlining for 11-point response scales. A sketch of this kind of cleaning follows below.
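As an illustration of the cleaning step referenced above, the sketch below flags straightliners in a grid of rating items and compares each country's top-choice share before and after removing them. The data, scale length, and top-box definition are assumptions for illustration, not the authors' actual pipeline.

```python
from statistics import mean

# Hypothetical respondent records: country plus answers to a grid of
# 5-point rating items (1 = lowest, 5 = highest).
respondents = [
    {"country": "SG", "ratings": [3, 4, 3, 2, 4]},
    {"country": "VN", "ratings": [5, 5, 5, 5, 5]},  # straightliner
    {"country": "VN", "ratings": [5, 4, 5, 5, 3]},
    {"country": "PH", "ratings": [5, 5, 4, 5, 5]},
    {"country": "SG", "ratings": [2, 2, 2, 2, 2]},  # straightliner
]

TOP = 5  # highest scale point ("top box")

def is_straightliner(ratings):
    """Flag respondents who give the identical answer to every grid item."""
    return len(set(ratings)) == 1

def top_choice_share(records):
    """Share of all given ratings that sit on the highest scale point."""
    all_ratings = [r for rec in records for r in rec["ratings"]]
    return mean(r == TOP for r in all_ratings) if all_ratings else 0.0

for country in sorted({rec["country"] for rec in respondents}):
    subset = [rec for rec in respondents if rec["country"] == country]
    cleaned = [rec for rec in subset if not is_straightliner(rec["ratings"])]
    print(country,
          f"raw top-choice share: {top_choice_share(subset):.2f},",
          f"after cleaning: {top_choice_share(cleaned):.2f}")
```

Comparing the raw and cleaned shares country by country makes visible how much of the apparent cross-country gap is driven by inattentive responding rather than genuine response-style differences.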


Where are my completes? Leveraging different response modes in a German online probability panel

Ms Elena Babamova (Lifepanel) - Presenting Author
Mr Carsten Broich (Lifepanel)

In online or mixed-mode probability panels, it has so far been standard to receive the majority of completions via email invites. To cover part of the offline population, additional elements such as mail invitations, tablets, or calls to landline phones are used. Nevertheless, little research has been conducted on the effectiveness of email as the primary channel for online probability panels. Lifepanel members are recruited by telephone, with a dual-frame RDD design as the basis for probability selection. With the majority of sample recruits coming from mobile phone numbers, it became questionable whether an email address is really required, or whether invites can instead be pushed to the mobile phone number via SMS or WhatsApp and yield higher response rates. As part of an adaptive design, we tested to what extent the most likely response mode can be predicted from demographics, and whether this knowledge can be used to increase panel response rates.
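A minimal sketch of the adaptive idea, under assumed inputs: fit a simple classifier that maps panelist demographics to the invite mode most likely to produce a completion, then invite each new panelist through the predicted mode. The feature set, labels, and model choice here are illustrative assumptions, not Lifepanel's actual design.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: demographics of past panelists and the
# invite mode that actually produced a completion for each of them.
panelists = [
    {"age_group": "18-29", "region": "urban", "recruited_via": "mobile"},
    {"age_group": "18-29", "region": "rural", "recruited_via": "mobile"},
    {"age_group": "30-44", "region": "urban", "recruited_via": "mobile"},
    {"age_group": "45-59", "region": "rural", "recruited_via": "mobile"},
    {"age_group": "60+",   "region": "rural", "recruited_via": "landline"},
    {"age_group": "60+",   "region": "urban", "recruited_via": "landline"},
]
completed_mode = ["whatsapp", "whatsapp", "sms", "email", "mail", "mail"]

# One-hot encode the categorical demographics and fit a multinomial model.
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(panelists)
model = LogisticRegression(max_iter=1000).fit(X, completed_mode)

# Predict the most promising invite mode for a new panel member.
new_member = {"age_group": "18-29", "region": "rural", "recruited_via": "mobile"}
print(model.predict(vec.transform([new_member]))[0])
```

In a real panel the labels would come from historical invite-response logs, and the predicted mode would feed the invitation scheduler rather than a print statement.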


State of surveys: a comparison of NPS scores in 10 countries over 10 years

Ms Laura Wronski (SurveyMonkey)
Mr Sam Gutierrez (SurveyMonkey) - Presenting Author
Ms Zoe Padgett (SurveyMonkey)

Analyzing metadata from millions of survey responses collected on the SurveyMonkey platform over the past 10 years, we explore trends in the survey-taking experience globally, across countries and regions. Cross-cultural differences in response distributions are well established in the academic literature. In this research, we examine responses to all Net Promoter Score (NPS) questions asked in 10 countries around the world to compare how those scores have changed over time and how consistent their country-by-country variation is. This research highlights the changing respondent experience and the considerations that survey creators will have to keep in mind when designing questionnaires.
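For reference, NPS is computed from a 0-10 likelihood-to-recommend question: respondents scoring 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal per-country computation, using made-up ratings rather than SurveyMonkey data, might look like:

```python
from collections import defaultdict

def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical (country, rating) responses standing in for real survey data.
responses = [("US", 9), ("US", 10), ("US", 6), ("JP", 7), ("JP", 8), ("JP", 3)]

by_country = defaultdict(list)
for country, rating in responses:
    by_country[country].append(rating)

for country, ratings in sorted(by_country.items()):
    print(f"{country}: NPS = {nps(ratings):+.0f}")
```

Because the score discards the 7-8 "passive" band entirely, small cross-cultural shifts in where respondents sit on the 0-10 scale can move NPS substantially, which is one reason country-by-country comparisons of the kind studied here require care.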