
ESRA 2023 Program

All time references are in CEST

Questionnaire translation and the global crises – new challenges and opportunities

Session Organisers: Dr Brita Dorer (GESIS-Leibniz Institute for the Social Sciences)
Dr Dorothée Behr (GESIS-Leibniz Institute for the Social Sciences)
Dr Alisú Schoua-Glusberg (Research Support Services)
Time: Thursday 20 July, 16:00 - 17:30
Room: U6-01c

Recent global crises have entailed new challenges as well as new opportunities for the field of questionnaire translation. The pandemic has accelerated digitalisation in many aspects of daily life and work, and that has affected (questionnaire) translation, too. One example is the impossibility of meeting in person for Review and harmonisation meetings in the TRAPD scheme, which made alternative virtual formats more accessible and more formalised. Shifts in survey mode created particular challenges, but also new opportunities for questionnaire translation, such as translating web-based self-completion surveys or using interpreters in telephone interviews. The move from face-to-face to self-completion surveys requires changes to the questionnaire text, so new aspects need to be considered when translating these questionnaires, such as addressing respondents correctly according to their gender, which in face-to-face surveys is usually handled by the interviewer. Shorter turnarounds for spontaneous survey projects, for instance in war situations or with newly arrived migrant groups, require different methodological approaches, such as interpreting, translating on the fly, or translation by non-qualified questionnaire translators. The war situation and migration in general require a larger number of languages to be offered at the national level, so that these population groups can be included in the surveys.
This session invites papers on various topics related to the translation and interpretation of survey instruments. These may be related to the effects triggered by recent global crises (the war in Ukraine, the COVID-19 pandemic, climate change), some of which are described above, but also to more traditional issues in questionnaire translation. Further examples of topics include: translating for small hand-held devices, surveys with many languages for which no qualified questionnaire translators are available, developing best-practice guidelines for translating questionnaires under time and resource restrictions, translating established survey instruments for new population groups.

Keywords: Questionnaire translation, digitalisation, survey modes, migration, interpreting

Papers

Rapid Double Translation of the OECD/PISA Global Crises Module

Mr Steve Dept (cApStAn) - Presenting Author

The OECD’s Programme for International Student Assessment (PISA) developed the PISA Global Crises Module (GCM) to collect information on how education systems around the world have responded to the COVID-19 pandemic and how students’ learning experiences and school preparedness have changed. The PISA Governing Board (PGB) felt that it was important for PISA to collect this information and expected that data from the GCM would be useful in policy discussions around mitigating educational disruptions caused by future pandemics or other global crises.
A limited set of school and student questionnaire items was developed following a process that involved leading questionnaire development experts, PISA National Centres, and small-scale cognitive interview studies in three countries. As the timeline was a major concern, cApStAn was commissioned to produce 112 language versions for 87 countries/economies (the PISA Participants).
The respective roles of the PISA Participants and cApStAn were thus reversed compared to the conventional PISA translation and verification design, in which PISA Participants produce the translations and cApStAn verifies them.
The main challenge was that the GCM questions were added after the translations of the PISA Field Trial questionnaires had been finalised.
cApStAn first conducted “upstream” work to optimise the source text via a translatability assessment and to train the linguists involved.
Participants received preview access to the translations and could suggest changes in an Excel form, providing rationales. The questionnaire developer (ETS) approved or rejected each request; once approved, the proposed change was checked for linguistic appropriateness and centrally implemented by cApStAn linguists. Participants then performed a final review. If no further issues arose, the translation was signed off and entered into the online survey platform. All steps were documented in a tracking form.
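To illustrate, the approval-and-sign-off workflow described above could be documented per change request in a simple tracking structure. The following Python sketch is purely illustrative: the field names, status labels and transition logic are our assumptions, not the actual cApStAn/ETS tracking form.

from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical status values for one suggested change to a translation;
# the real tracking form may use different fields and labels.
class Status(Enum):
    SUGGESTED = auto()     # participant proposed a change, with a rationale
    APPROVED = auto()      # questionnaire developer (ETS) approved the request
    REJECTED = auto()      # questionnaire developer rejected the request
    IMPLEMENTED = auto()   # cApStAn linguist checked and implemented the change
    SIGNED_OFF = auto()    # participant's final review raised no further issues

@dataclass
class ChangeRequest:
    language_version: str              # e.g. one of the 112 language versions
    item_id: str                       # questionnaire item the change refers to
    rationale: str                     # participant's justification
    status: Status = Status.SUGGESTED
    history: list = field(default_factory=list)

    def transition(self, new_status: Status) -> None:
        """Record every step so the whole workflow remains documented."""
        self.history.append((self.status.name, new_status.name))
        self.status = new_status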


Survey instruments in cross-national research – how do social scientists vs. professional translators translate these?

Dr Dorothée Behr (GESIS - Leibniz-Institute for the Social Sciences) - Presenting Author
Dr Brita Dorer (GESIS - Leibniz-Institute for the Social Sciences)
Dr Diana Zavala-Rojas (UPF Barcelona)
Ms Danielly Sorato (UPF Barcelona)

In cross-national survey research, it is quite common that researchers themselves translate questionnaires into their target language, even though (some) guidelines recommend the use of professional translators at the initial translation stage. There is no empirical evidence yet on how these two groups translate questionnaires and what their respective involvement means for translation quality. An EC H2020 project paved the way for an experiment on survey translation procedures, which included not only professional translators translating from scratch, but also social scientists translating from scratch or doing (full/light) post-editing. (“Post-editing” refers to the correction of machine translation output.) For both our English-German and English-Russian experiments, we had 3 professional translators translating from scratch, 1 social scientist translating from scratch, 1 social scientist conducting light post-editing, and 1 social scientist conducting full post-editing.
We error-coded all translations, using a modified MQM-DQF translation error scheme (adapted to the text genre of questionnaires) and consensual coding. The individual error categories can be summarized as errors of accuracy, fluency, style, and survey-specific errors. In this presentation, we will compare the resulting translations across the different groups, taking into account their respective backgrounds and their use of machine translation output (where applicable). What are the strengths and weaknesses of each group? What do machine translation and post-editing mean for the results? Overall, this research will lead to more concrete hypotheses for a larger project, which will compare 15 professional translators with 15 social scientists, with each of them using both translation from scratch and post-editing.
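As a purely illustrative sketch, error codes of the kind described above could be aggregated per experimental condition as follows; the four summary groups are those named in the abstract, while the condition labels, input format and function name are hypothetical.

from collections import Counter
from typing import Dict, Iterable, Tuple

# Summary error groups mentioned in the abstract (based on a modified
# MQM-DQF scheme); the rest of this sketch is illustrative only.
ERROR_GROUPS = ("accuracy", "fluency", "style", "survey-specific")

def tally_errors(coded_errors: Iterable[Tuple[str, str]]) -> Dict[str, Counter]:
    """Count consensually coded errors per condition.

    `coded_errors` is an iterable of (condition, error_group) pairs,
    e.g. ("professional_from_scratch", "accuracy").
    """
    per_condition: Dict[str, Counter] = {}
    for condition, group in coded_errors:
        if group not in ERROR_GROUPS:
            raise ValueError(f"unknown error group: {group}")
        per_condition.setdefault(condition, Counter())[group] += 1
    return per_condition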


The TeO2 survey among non-French speakers

Mrs Constance Hemmer (Ined) - Presenting Author
Mrs Aurélie Santos (Ined)

Conducted by both the French Institute for Demographic Studies (INED) and the National Institute of Statistics and Economic Studies (INSEE) from 2019 to 2020, the ‘Trajectories and Origins 2’ (TeO2) survey is a large-scale (N=27,181) statistical survey that examines the diversity of populations in metropolitan France and allows researchers to study the influence of migration background on individuals’ trajectories. The written questionnaire, designed for a face-to-face interview lasting one hour on average, exists only in French. However, the survey protocol was designed to reach the entire target population, in particular immigrants who are not fluent enough in French to answer the questionnaire. A set of documents in several languages was prepared to facilitate data collection in this sub-population: first a notification letter about the survey translated into 10 languages, then an identification card translated into 22 languages to identify the main language used by non-French-speaking immigrants, and finally a card book translated into 10 languages to help the interviewers during data collection. In addition, a specific data collection procedure was carried out at INED by interviewer-translators hired expressly for live translation, especially for complex concepts dealt with in the questionnaire. This procedure made it possible to collect 228 interviews conducted entirely in foreign languages (8 languages), thus increasing the response rate of non-French-speaking immigrants and enhancing the survey's representativeness of this target population.
While the ambition of the non-French-speaker procedure was to reach the immigrants least fluent in French, our analysis of the survey data highlights a continuum in French proficiency among immigrant respondents, a continuum that we were able to capture thanks to the set of translation assistance documents implemented in the survey as a whole.


Who translates and why? Translation procedures in large-scale cross-national studies

Mrs Ulrike Efu Nkong (GESIS) - Presenting Author
Dr Dorothée Behr (GESIS)

From the social science community, we know that a wide variety of people translate questionnaires for cross-national studies. These can include professional translators, other language specialists, researchers themselves, or even students. Existing study documentation, which is publicly available in some cases, allows only limited insight into the translation procedures and gives hardly any detail on the actual translation personnel. However, the quality of questionnaire translation plays a vital role in the data quality of comparative surveys. The skills and qualifications of the people producing these translations should therefore not be neglected.

With this study, we want to shed light on who really does the initial translations of survey instruments in large-scale cross-national studies, and why. To this end, we will survey the national coordinators of a variety of large-scale regional and global studies in the social sciences. We will ask about their choices of translation personnel and focus on the qualifications, skills and competencies of the translators used. We will also examine the overall translation procedure that integrates these initial translations and is meant to ensure the final quality and comparability of the survey instruments.

Computer-assisted translation tools as well as machine translation are widely used in today’s translation business. As a second point in our study, we therefore want to understand to what extent these tools have also found their way into the field of questionnaire translation. By surveying national coordinators of cross-national studies on this topic, we will establish the status quo and explore the potential impact of these tools on questionnaire translations and likely developments in this field.


The impact of machine translation on the dynamics of the Review step in a questionnaire translation project – an experiment for English-to-German translation

Dr Brita Dorer (GESIS-Leibniz Institute for the Social Sciences, Mannheim) - Presenting Author
Dr Dorothée Behr (GESIS-Leibniz Institute for the Social Sciences, Mannheim)
Dr Diana Zavala-Rojas (Universitat Pompeu Fabra, Barcelona)
Dr Danielly Sorato (Universitat Pompeu Fabra, Barcelona)

In cross-cultural surveys, a source questionnaire typically has to be translated into multiple languages. The quality of these translations is highly important for the comparability and overall quality of the final survey data. Over the past years, Machine Translation (MT) has improved in quality and is now increasingly used in the translation business for different text types. For questionnaires, until a few years ago, the recommendation had been against the use of MT, as its quality was considered insufficient. To bring together the technical improvements in MT and the need to optimise questionnaire translations, we carried out highly standardised questionnaire translation experiments in the language pairs English-German and English-Russian. The TRAPD scheme (Translation, Review, Adjudication, Pretesting, Documentation) has become the gold standard for translating questionnaires in cross-cultural contexts. We focus on the Review step, which is considered the heart of the TRAPD process. Three English-German Review sessions will be compared: in the baseline scenario, both initial translations (the T in TRAPD) were drafted by human translators; in the two treatment scenarios, one of the two translations resulted from machine translation and post-editing, i.e., a human corrected the MT output following specific guidance: one scenario involved so-called ‘light’ post-editing (understandable, but not necessarily grammatically correct), the other ‘full’ post-editing of the MT output (quality comparable to human translation). The overall usefulness of MT in our experiments is studied in other papers (Zavala-Rojas et al.; Sorato et al., both submitted). The three Review sessions were recorded and transcribed, and the transcripts were then coded according to a coding scheme. We study whether the involvement of MT in the Translation step had an impact on the dynamics of the Review discussions; specifically, we focus on whether MT facilitated the discussions or made them more challenging.
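By way of illustration only, the three Review scenarios described above could be summarised as configuration records like the following Python sketch; the key names and labels are ours, chosen for this sketch, not the project's own terminology.

# Illustrative summary of the three English-German Review scenarios;
# each tuple holds the two initial translations (the T in TRAPD) that
# were discussed in the corresponding Review session.
REVIEW_SCENARIOS = {
    "baseline": ("human translation from scratch",
                 "human translation from scratch"),
    "treatment_light_post_editing": ("human translation from scratch",
                                     "machine translation + light post-editing"),
    "treatment_full_post_editing": ("human translation from scratch",
                                    "machine translation + full post-editing"),
}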