ESRA 2025 Preliminary Program
All time references are in CEST
Questionnaire translation in a changing world: challenges and opportunities
Session Organisers: Dr Alisú Schoua-Glusberg (Research Support Services), Dr Brita Dorer (GESIS-Leibniz Institute for the Social Sciences), Dr Dorothée Behr (GESIS-Leibniz Institute for the Social Sciences)
Time: Wednesday 16 July, 11:00 - 12:30
Room: Ruppert paars - 0.44
As the world becomes increasingly globalised and population demographics change at an ever faster pace, high-quality questionnaire translation is increasingly important for producing reliable cross-cultural data.
The digital transition has also entered the field of translation in general and questionnaire translation in particular. While the good-practice recommendation in cross-cultural survey methodology is still to apply team approaches involving appropriately trained and experienced questionnaire translation experts, ideally TRAPD (consisting of the steps Translation, Review, Adjudication, Pretesting and Documentation), digital innovations such as machine translation and artificial intelligence are increasingly entering the field of (questionnaire) translation. Technical innovations related to questionnaire translation can take many forms, such as platforms that allow a smooth workflow for developing, translating and later fielding questionnaires in many languages. Crowd-based translation schemes, for instance, are also becoming more popular and may contribute to translating questionnaires. Where are the strengths and weaknesses of such developments? What role should AI play in the translation of survey instruments?
This session addresses various aspects of questionnaire translation, whether related to digital innovations or not. We invite papers on topics related to the translation and interpretation of survey instruments, for example: experiments on new techniques or alternative methods; challenges of specific language pairs or translation methods; how to translate certain questionnaire elements, such as answer scales; approaches to developing questionnaire translations for minority languages; or comparisons of the effects of different translation quality assessment approaches on survey data.
Keywords: Questionnaire translation, cross-cultural surveys, survey methodology, digitalisation
Papers
Questionnaire design of the revised Human Values Scale for cross-cultural surveys
Mr Tim Hanson (European Social Survey HQ (City St Georges, University of London)) - Presenting Author
Dr Elena Sommer (German Institute for Economic Research (DIW Berlin))
Dr Brita Dorer (GESIS)
Ms Ulrike Efu Nkong (GESIS)
Professor Shalom Schwartz (Hebrew University of Jerusalem)
Since its first round, the European Social Survey (ESS) has included a 21-item measure of ten basic values shared across cultures. This instrument, developed by Shalom Schwartz and known as the Human Values Scale (HVS), has been widely used in various disciplines. Following recent recommendations from Schwartz, the ESS will use the new revised 20-item HVS starting from its 12th round (2025-26). The revised scale offers greater reliability and uses shorter and simpler items than the initial scale. It also introduces a single gender-neutral version for all respondents, replacing the previous separate male and female versions. This change is particularly important given the ESS’s upcoming transition to self-completion mode and use of a paper questionnaire. To ensure that the new scale measures the same concepts in all participating countries and languages, the ESS conducted an “advance translation” review, producing a list of translation annotations to clarify the precise meaning of source items.
The new HVS will also be fielded for the first time in the German Socio-economic Panel (SOEP), including the refugee sample, in 2025. While both surveys use the same source text, the SOEP additionally employs images to visualise the statements, aiming to enhance respondents’ engagement in the self-completion module on social distance at the end of a long interview. A multilingual cognitive pretest was conducted to assess whether the images match the statements and whether the statements were interpreted consistently by respondents with different migration backgrounds.
As part of a collaborative effort, findings from the ESS’ advance translation and the SOEP’s cognitive pretest were used to refine the final version of the new HVS. Our presentation provides an overview of the new HVS highlighting key findings from the testing procedures and the challenges of cross-cultural adaptation.
Using measures of similarity and dissimilarity to assess: How is post-editing questionnaire translations different from from-scratch translations, in general and across different groups of translators (professional translators vs. social scientists)?
Dr Dorothée Behr (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Ms Ulrike Efu Nkong (GESIS - Leibniz Institute for the Social Sciences)
Dr Anke Radinger (GESIS - Leibniz Institute for the Social Sciences)
Mrs Chia-Jung Tsai (GESIS - Leibniz Institute for the Social Sciences)
The DFG-funded project TransBack looks into translation methodology in cross-cultural survey projects, focusing on the role of translators and machine translation in an empirical translation experiment. To this end, 16 professional translators and 16 social scientists translated questionnaire items from English into German: 25 items had to be translated from scratch, while 19 items were provided as machine-translated versions (DeepL) that participants had to correct (post-edit, in translation jargon) as they deemed necessary. In this presentation, we will use measures of (dis-)similarity of translation versions to each other and to the source text (e.g., Levenshtein distance; the ratio of unique translations per segment) to delineate the two translation procedures, from-scratch translation and post-editing. We will further delve into the (dis-)similarities produced by translators vs. social scientists when translating from scratch/post-editing, since the background of the persons involved in translation procedures equally plays a role in shaping translation outputs (Harkness, 2003). The goal is to understand the changes that come with new technologies (here: machine translation) and their impact on the work of translators of different backgrounds. Knowledge about differences and potential shortcomings will help underpin translation guidance and translation models and steer the process in the right direction.
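The two (dis-)similarity measures named in the abstract can be sketched as follows. This is an illustrative Python sketch, not the TransBack project's actual analysis code; the German item renderings are invented examples.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions,
    deletions, substitutions) turning string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def unique_translation_ratio(versions: list[str]) -> float:
    """Share of distinct translations among all versions of one segment:
    1.0 means every translator produced a different wording."""
    return len(set(versions)) / len(versions)

# Hypothetical German renderings of one source item by four translators
versions = [
    "Ich fühle mich oft einsam.",
    "Ich fühle mich häufig einsam.",
    "Ich fühle mich oft einsam.",
    "Oft fühle ich mich einsam.",
]
print(unique_translation_ratio(versions))          # 3 distinct of 4
print(levenshtein(versions[0], versions[1]))
```

Lower Levenshtein distances between versions (or between a post-edited version and the raw machine output) would indicate more similar wording; a lower unique-translation ratio indicates more convergence across translators.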
Survey Translation in a Technology-Rich Environment: Tweak the Tools or Adapt Best Practice?
Mr Steve DEPT (cApStAn) - Presenting Author
The team translation approach to survey translation was developed at a time when data collection instruments were pencil-and-paper questionnaires. The TRAP-D design (Translation, Review, Adjudication, Pre-testing and Documentation; see e.g. Harkness et al. 2003, 2004) is an often-cited methodology for survey translation and adaptation, and it has been widely recognised as best practice by the scientific community.
Meanwhile, technology has evolved rapidly. Questionnaires have become digital and are delivered on computers, tablets, and smart devices. New survey modes have ushered in new types of questionnaire adaptation, e.g. adaptation to self-completion mode. The combination of mature translation technology, such as computer-aided translation tools (CAT tools), and new advances in machine learning, machine translation and large language models (LLMs) can bring efficiency gains but require an expertise that is not always available to survey methodologists.
The question arises whether it is preferable to tweak tools and adapt technology so that it can accommodate established best practice in survey translation, or whether best practice should evolve to leverage new advances in technology.
In this presentation, we argue that some components of the team translation approach and related translation and adaptation guidelines may have retained a quasi-dogmatic status although they are no longer adapted to the current technological landscape. It has become possible to design workflows in which repetitive actions are automated and in which the input of professional linguists and subject matter experts (SMEs) adds value at very specific points in the translation management system. This requires a level of professionalisation in survey translation, discernment in using AI purposefully, and pre-testing of translated versions of the survey questionnaires.
The author will provide concrete examples from international surveys in a technology-rich environment.
The Influence of Translator Backgrounds on Survey Measurements: Evidence from a Survey Experiment
Miss Chia-Jung Tsai (GESIS – Leibniz Institute for the Social Sciences) - Presenting Author
Dr Clemens Lechner (GESIS – Leibniz Institute for the Social Sciences)
Dr Dorothée Behr (GESIS – Leibniz Institute for the Social Sciences)
Miss Ulrike Efu Nkong (GESIS – Leibniz Institute for the Social Sciences)
Dr Anke Radinger (GESIS – Leibniz Institute for the Social Sciences)
Questionnaire translation is essential for transferring survey instruments to different languages and cultural contexts, but it can unintentionally change how questions are understood and measured. Such changes may impact the reliability and validity of survey results. This study focuses on how translators' backgrounds affect key statistical properties of survey items, such as variance, skewness, and issues with reverse coding, and explores whether these differences affect responses to survey items.
To investigate this potential effect, we will conduct a survey experiment based on a standardized English questionnaire translated into German. The translations were created by two groups: 16 professional translators and 16 social scientists. By analysing the data collected with these 32 questionnaire versions, we will examine how differences in translator backgrounds affect the quality of item measurements, including variance, skewness, and errors in reverse-coded items.
The aim of the study is to highlight the significance of translation choices and translator backgrounds in shaping survey outcomes. Even small differences in wording or cultural framing can introduce bias, affecting item measurements and the underlying constructs. This study contributes to improving survey translation methodology and data comparability in cross-cultural research.