Conference Programme 2015
Friday 17th July, 13:00 - 14:30 Room: O-206
What does it mean to produce equivalent questionnaire translations 2?
Convenor: Dr Dorothée Behr (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1: Dr Alisú Schoua-Glusberg (Research Support Services Inc.)
Coordinator 2: Ms Brita Dorer (GESIS - Leibniz Institute for the Social Sciences)
Session Details
Equivalent data in cross-cultural and cross-national surveys is the precondition for any meaningful comparison across countries or cultures. Equivalence is a complex concept, though. Johnson (1998) lists over 50 different equivalence definitions from the social sciences, psychology and related fields that may broadly be classified into interpretive and procedural equivalence. The field of translation studies equally struggles with a multitude of approaches and definitions (Kenny, 1998), which specify, for instance, the rank of equivalence (e.g., word or textual level) or the type of equivalence (denotative, pragmatic, etc.) that can be obtained.
In this session, we will look into what it means to produce equivalent questionnaire translations. Key questions in this regard are: What needs to be kept equivalent and what needs to change in order to produce questionnaire translations that work as intended? What guidance can be given to translators of questionnaires in cross-national studies?
Presenters are invited to cover any of the following topics: (1) equivalence of form vs. equivalence of effect; (2) face-value-equivalence vs. perceived meaning; (3) the role of culture-specific discourse conventions (e.g., directness, politeness; theme-rheme); (4) questionnaire design principles (usually developed on the basis of the English language) and their challenges for translation; (5) challenges for particular language combinations; (6) methods to address equivalence: interplay between statistical assessment and expert judgment, split-ballot, mixed-method, rating tasks (for response scales, for instance), corpus linguistics. Presentations are encouraged to further our knowledge on “changes” in the translation that may be necessary in order to produce translations that pave the way for comparable data.
Paper Details
1. Translatability and translation in practice: experiences from the 6th European Working Conditions Survey
Dr Gijs Van Houten (Eurofound)
Dr Milos Kankaras (Eurofound)
The European Working Conditions Survey is a repeated cross-sectional face-to-face survey of workers, carried out for the sixth time in 2015 and covering 35 countries in 38 languages or language versions.
This paper provides illustrations of the outcomes of the cognitive testing, advance translation, translatability assessment and TRAPD translation process of the EWCS questionnaire and discusses the challenges that had to be overcome to successfully implement the process.
2. Relating translation quality and measurement quality: exploring translation assessment methods
Mrs Diana Zavala Rojas (UPF)
Mrs Brita Dorer (GESIS)
Translation guidelines suggest a committee approach and the use of quality 'controls' at different stages of the translation process. However, it is difficult to measure, or give a quantitative indicator for, the success of those procedures. This paper studies the impact of translation decisions. We relate an estimate of measurement quality to qualitative indicators of translation decisions and measurement characteristics, e.g. the layout of the questionnaire, linguistic complexity, and the properties of the answer scales. The objective is to estimate the effect of translation decisions on the estimates of measurement quality.
3. Translation Pre-Testing and Instrument Usability at the United States Census Bureau
Ms Kathleen Kephart (US Census Bureau)
Dr Patricia Goerman (US Census Bureau)
Ms Mikelyn Meyers (US Census Bureau)
The U.S. Census Bureau has had success with concurrent usability and cognitive testing of instruments in English. To test the application of concurrent testing on a translated instrument, we are administering a web survey to Spanish speakers while tracking their eye movements. Multiple cognitive and usability metrics will be analyzed to identify problematic areas of a translated survey. The goal of our research is twofold:
1) To examine challenges that may arise when performing joint cognitive and usability testing on a Spanish immigrant minority population.
2) To explore the possibility of identifying translation issues with eye tracking technology.
4. Spoken language versus written language: a challenge for the linguistic validation of data collection instruments for international surveys
Mr Andrea Ferrari (CAPSTAN)
Mr Steve Dept (CAPSTAN)
When CAPI/CATI systems are used, the interviewer follows a script and reads out the questions to the respondent. The authors examine the questions and challenges that this entails for the linguistic validation of instruments used in cross-national surveys. The validation process needs some rethinking in the case of materials that will never be seen in written form but only 'heard' by respondents. This applies especially, but not only, to diglossic languages (e.g. Arabic, Swiss German) and languages spoken by immigrant populations (e.g. Spanish in the US, Russian in Israel).