



Tuesday 18th July, 11:00 - 12:30 Room: F2 106


Questionnaire translation in theory and practice: achievements, challenges, and innovations 1

Chair Dr Dorothée Behr (GESIS - Leibniz Institute for the Social Sciences )
Coordinator 1: Ms Brita Dorer (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 2: Dr Alisú Schoua-Glusberg (Research Support Services Inc.)

Session Details

The field of questionnaire translation in cross-national and cross-cultural research had a slow start, but it has gathered speed and become prominent in survey projects and research since the 1990s at the latest. Best practice in terms of methods (committee approach, back translation, pretesting, translator profiles, etc.) has driven the field; this topic has made immense progress, but it remains a never-ending story, especially when considered from a cross-disciplinary perspective (survey methodology, health, psychology, education, business). The importance of cultural factors, which affect both language and item content, is nowadays widely acknowledged. However, within survey methodology, as well as in and across other disciplines, many different meanings – and possibly false restrictions – are attached to the concepts of adoption, translation, adaptation, or localization. There is more agreement on the provision of background information on concepts or terms, which was already called for in 1948 (Barioux) and is now a key feature of comparative research. There is by now also agreement on the early integration and involvement of translation and translation experts when designing a source questionnaire; the methods of advance translation and translatability assessment embody this strand. IT and translation tools are slowly gaining a foothold, in the form of dedicated portals and translation tools or of corpus linguistics. IT supports both the macro-processes (the various stages of translating, assessing, and testing) and the micro-processes (the translation itself). Against the backdrop of all these developments, it is somewhat surprising that (systematic) empirical research on the effects of different translation versions is still largely missing – but here, too, research has sprung up, the European SERISS project being a prime example.

Researchers and practitioners are invited to present on achievements in the field of questionnaire translation, on topics that are still inconclusive or challenging, and on innovations. Presentations can tackle any of the aforementioned themes, but they can also go beyond them. Presenters can address the theory but also present their applications in cross-national and cross-cultural survey research and their lessons learned.

Paper Details

1. The impact of information and technology in translation processes for international large scale assessment studies: First results of a dissertation project
Mrs Britta Upsing (DIPF)

Players from different professional backgrounds are involved in the process of translating test items for international large-scale assessment studies like the Programme for International Student Assessment (PISA) or the Programme for the International Assessment of Adult Competencies (PIAAC). These studies recommend a variation of the TRAPD approach for the adaptation of their test items. In short, in TRAPD the source instrument is Translated independently by two different translators. The two target versions are then Reviewed by a team to create a single target version, which is then Adjudicated as necessary and Pretested. The whole process is Documented (Harkness et al. 2010, p. 128). The recommended translation procedure in PIAAC or PISA includes – as a minimum – double translation, reconciliation of the two target versions, an external review by experts, verification, and a final layout check. The outcomes of the whole process are documented per language in translation monitoring forms (OECD 2014, Ferrari et al. 2013). Training sessions, guidelines, and software tools are provided to the different process participants to ensure that they are able to execute their tasks.
PIAAC was the first international large-scale computer-based assessment (CBA) study that also included automated scoring of test items, and consequently the translation of both the CBA items and their scoring information (Kirsch, Yamamoto 2013, pp. 1–2). The adaptation tasks arising from this new setup therefore posed new challenges for all players involved (cf. Upsing et al. 2011).
In this dissertation project, the qualitative content analysis method (Mayring 2010) was used to explore the translation monitoring forms for the 34 different language versions in the PIAAC study, with the aim of identifying the questions and problems that had arisen and the corrections which were implemented. The underlying question was whether translators across target languages face similar difficulties in their work, and whether they find similar answers to resolving these (linguistic or technical) issues. The broader aim was to qualitatively analyze the impact of information, information technology and communication on the adaptation process.
The results of this analysis provided the background for an interview study with 20 reconcilers, translation managers, translators, verifiers and project managers. The focus of the interviews was set on the information needs and the information environment that the different actors face when completing their tasks. The interviews were designed to better understand the preferences and priorities of the various parties. A particular area of emphasis was to determine how these individuals deal with the information they receive, and to identify the extent to which software tools are integrated into the adaptation process. The interviews were therefore subdivided into three distinct subject areas: technical support and software tools, instructions and other aids to support the process, and ongoing communication during daily work.
The results of the interview analysis will be presented. Preliminary results indicate, for instance, that translators are often overwhelmed by the information they receive, that face-to-face training sessions and personal involvement in the processes motivate them to excel, and that software tools play an ambiguous role.


2. Asking Moses to help with translation verification
Dr Yuri Pettinicchi (Max-Planck-Institute for Social Law and Social Policy (MPISOC))
Mr Paulius Šukys (Technische Universität München (TUM))

In its upcoming seventh wave (fieldwork in 2017), the Survey of Health, Ageing and Retirement in Europe (SHARE) will ask over a thousand questions in 39 different languages in 28 countries.
Translations from the English source questionnaire into the target languages are handled through an online tool that allows software developers to build up the CAPI instrument in the national language.
SHARE ensures high-quality translation using the TRAPD procedure. Unfortunately, the complexity of the online tool creates challenges for translators that make the outcome somewhat error-prone.
Our aim is to perform translation verification efficiently within the constraints of budget and limited manpower. This paper describes our approach to translation verification.
We built a program that reads the output provided by translators, stores translations and metadata, performs sanity checks (e.g. empty fields or wrong indexation), and runs a content-related check.
For the latter we rely on a statistical machine translation system (Moses) to provide back translations. Moses uses a corpus of approved translation pairs and a calibrated model to produce the most likely back-translation.
As a last step, our program evaluates text similarity by comparing the back-translation provided by Moses with the original English version. The final outcome is a report on the quality of the translations, flagging text to be re-checked.
Our approach processes high volumes of data and text efficiently. In this paper we measure the incidence of false positives, i.e. flagged items that were in fact translated properly. The program still needs further improvement to be reliable for long sentences and out-of-context situations.
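The verification pipeline described in this abstract – sanity checks, back-translation, similarity scoring, flagging – could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, item fields, and similarity threshold are assumptions, and the call to Moses is replaced by a placeholder callable that maps target-language text back to English.

```python
from difflib import SequenceMatcher

# Illustrative cutoff; in practice this would be calibrated against
# known-good translations to control the false-positive rate.
SIMILARITY_THRESHOLD = 0.6

def sanity_check(item):
    """Flag structural problems such as empty fields or wrong indexation."""
    problems = []
    if not item.get("translation", "").strip():
        problems.append("empty field")
    if item.get("source_index") != item.get("target_index"):
        problems.append("wrong indexation")
    return problems

def similarity(back_translation, source):
    """Character-based similarity between back-translation and English source."""
    return SequenceMatcher(None, back_translation.lower(), source.lower()).ratio()

def verify(items, back_translate):
    """Run sanity and content checks; return a report of items to re-check.

    `back_translate` stands in for the statistical MT system (Moses):
    any callable mapping target-language text back to English.
    """
    report = []
    for item in items:
        problems = sanity_check(item)
        if not problems:
            back = back_translate(item["translation"])
            score = similarity(back, item["source"])
            if score < SIMILARITY_THRESHOLD:
                problems.append(f"low back-translation similarity ({score:.2f})")
        if problems:
            report.append({"id": item["id"], "problems": problems})
    return report
```

A real setup would substitute a trained Moses model for `back_translate` and a more robust similarity measure (e.g. token- or n-gram-based) for `difflib`, but the control flow – structural checks first, content check only on structurally sound items – would be the same.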


3. The Translation Management Tool (TMT) used for a survey applying the TRAPD model: the example of the European Social Survey (ESS), Round 8
Ms Brita Dorer (GESIS-Leibniz Institute for the Social Sciences)
Mr Maurice Martens (CentERdata (University of Tilburg))
Mr Sebastiaan Pennings (CentERdata (University of Tilburg))

The Translation Management Tool (TMT) is an online service for supporting questionnaire translation processes for large multilingual surveys. It has been used since 2004 for the Survey of Health, Ageing and Retirement in Europe (SHARE) and over time has supported several other studies. In order to be usable also for surveys that apply the ‘team’ or ‘committee approach’ for their questionnaire translation, it has been adapted for use by the European Social Survey (ESS). The ESS has used the team approach – in the extended form of a ‘TRAPD model’ – since its very beginning.

Under the SERISS (Synergies for Europe's Research Infrastructures in the Social Sciences) cluster project, the TMT is being used by the ESS for the first time in its 8th round: in 2016, three countries used the tool to produce their final national survey instruments. This exercise was thus not a ‘dummy’ test but a real-time application of a platform that was completely new to the ESS context, which had previously used Excel files for its questionnaire translation activities.

The ESS follows the TRAPD model, developed by Janet Harkness and consisting of parallel Translations, a Review session, Adjudication, Pretesting and Documentation for each of its national language versions. In addition, all language versions are subject to two checks of translation quality: verification by the service provider cApStAn as well as SQP coding by the ESS team at Universitat Pompeu Fabra (UPF) in Barcelona. These steps, too, were handled in the TMT.

The three national teams volunteering to use the TMT were Lithuania (Lithuanian / Russian), Poland (Polish), and Russia (Russian); the testing thus included one multilingual country (Lithuania) and one ‘shared’ language fielded in more than one country (Russian).

At the time of writing this abstract, the use of the TMT in ESS Round 8 was not yet complete. The paper will present first findings from the perspective of the national teams, the project management, and the programmers. Advantages and disadvantages detected, as well as solutions found and ideas for future use and development, will be presented and discussed, also with a view to other potential surveys.

The longer-term goals of this first real-time use of the TMT in ESS Round 8 are twofold. On the one hand, the aim is to make the TMT fit for use by the ESS in future rounds for all its language versions, with a view to interlinking it with the ‘Questionnaire Design Documentation Tool’ (QDDT) and the ‘Question and Variables Databank’ (QVDB), equally developed under SERISS, and thus to handle the whole questionnaire translation workflow, together with questionnaire design and a searchable database of questions and variables, in one platform. On the other hand, the ESS serves as a role model for other cross-cultural surveys, showing that the TMT can successfully be applied by surveys with similar approaches – relying on committee or team translation to translate their questionnaires into multiple language versions, which means many different actors in various roles are involved.


4. Assessing the impact of different German-language translations of ESS Round 3 items on the resulting data
Dr Dorothée Behr (GESIS - Leibniz Institute for the Social Sciences)

Questionnaire translation is known to be a weak link in cross-national survey research. To enhance comparability in a comparative survey, multi-step translation and assessment methods such as the TRAPD model (Harkness, 2003) are often employed. However, comparability should not only exist between the source questionnaire (which is often English in major international studies) and a particular translation, but ideally also between the different translated versions of a source questionnaire. The latter requirement becomes particularly visible if one looks at translations into so-called shared languages, such as the translations of a survey questionnaire into German for Germany, Austria, and Switzerland. While in the past, countries have often worked individually even though they “shared” the same language, nowadays increased levels of cooperation can be noted (e.g., European Social Survey, 2014). This article looks at experiments using different German-language translations of the ESS Round 3 questionnaire (rotating module on personal and social well-being). Items were selected for which the Austrian, German, and Swiss translations differed markedly – at least at face value. These items were then implemented in randomized web experiments in the GESIS Online Panel Pilot (a probability-based panel of Internet users in Germany). Analyses include statistical comparisons as well as the analysis of open-ended probing questions. Results are presented along with pointers on what to look out for in translation.