
Thursday 16th July, 14:00 - 15:30 Room: O-101

Assessing and addressing measurement equivalence in cross-cultural surveys 3

Convenor Dr Gijs Van Houten (Eurofound)
Coordinator 1 Dr Milos Kankaras (Eurofound)

Session Details

Over the past decades, the number of cross-cultural surveys has increased dramatically. A major challenge in cross-cultural surveys is to ensure that the answers of different respondents to survey items measure the same concepts. If measurement equivalence is not achieved, it is difficult, if not impossible, to make meaningful comparisons across cultures and countries.

Most cross-cultural surveys aim to reduce bias by finding the right balance between harmonisation and local adaptation of the methods used in each stage of the survey process (e.g. sampling, questionnaire development and translation, fieldwork implementation, etc.). Furthermore, an increasing number of research projects are looking into the determinants of measurement equivalence. There are three main approaches to the analysis of measurement equivalence: multigroup confirmatory factor analysis, differential item functioning, and multigroup latent class analysis. These latent variable models are based on different modelling assumptions and are appropriate for different types of data (cf. Kankaraš and Moors, 2010).

This session invites papers about the assessment of measurement equivalence in cross-cultural surveys as well as papers about efforts made to address measurement equivalence in the design and implementation of surveys. The aim is to facilitate an exchange that benefits both the future analysis of measurement equivalence and the future design of cross-national surveys.

Kankaraš, M., & Moors, G.B.D. (2010). Researching measurement equivalence in cross-cultural studies. Psihologija, 43(2), 121–136.

Paper Details

1. Multigroup-PCA and -PLS: new methods for assessing the structural invariance of a scale. The example of the CAST (Cannabis abuse screening test) in 13 countries
Mr Stéphane Legleye (INED)
Miss Aida Eslami (ANSES)
Miss Stéphanie Bougeard (ANSES)

The first step in assessing the invariance of a test is its structural (configural) invariance across groups (countries, schools, etc.). We present two methods that describe the internal structure of a scale and explain group-specific deviations by other variables: multigroup principal component analysis (mgPCA) and multigroup partial least squares (mgPLS). Both methods provide intuitive graphics and similarity indices relating each group to the common structure. The R package multigroup has been developed by Eslami, Bougeard and colleagues. An application is shown for the Cannabis abuse screening test (CAST) administered in schools in 13 countries in 2011.
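The abstract's two central pieces, a common component structure extracted across all groups and a similarity index relating each group to it, can be sketched in a few lines. This is a minimal illustrative sketch in Python/NumPy, not the authors' R package multigroup; the function name `multigroup_pca` and the normalised subspace-overlap similarity index are assumptions chosen for illustration only.

```python
import numpy as np

def multigroup_pca(groups, n_components=2):
    """Toy sketch of multigroup PCA: center each group on its own mean,
    pool the centered data to extract a common component structure,
    then score each group's agreement with that common structure."""
    # Center each group separately, removing between-group mean shifts
    centered = [g - g.mean(axis=0) for g in groups]
    pooled = np.vstack(centered)
    # Common loadings: right singular vectors of the pooled, centered data
    _, _, vt = np.linalg.svd(pooled, full_matrices=False)
    common = vt[:n_components].T            # (n_items, k) common loadings
    similarities = []
    for g in centered:
        _, _, vtg = np.linalg.svd(g, full_matrices=False)
        group_load = vtg[:n_components].T   # this group's own loadings
        # Normalised subspace overlap in [0, 1]: 1 means the group's
        # component space coincides with the common one
        overlap = np.linalg.norm(common.T @ group_load) ** 2 / n_components
        similarities.append(overlap)
    return common, similarities
```

Groups whose internal structure matches the pooled solution score near 1; a markedly lower index flags a group (e.g. a country) whose scale structure deviates from the common one, which is the configural-invariance question the paper addresses.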

2. Cross-Cultural Equivalence of Survey Response Latencies
Professor Timothy Johnson (University of Illinois at Chicago)
Professor Allyson Holbrook (University of Illinois at Chicago)
Miss Marina Stavrakantonaki (University of Illinois at Chicago)

There is little empirical information regarding cultural variability in response latency patterns and the degree to which these provide comparable information or have comparable meaning across cultures. Using data from a survey of Chicago adults, we examine associations between respondent cultural background, measured both via race/ethnicity and language preference, and response latencies to survey questions intentionally designed to introduce information processing difficulties. (Similar analyses comparing responses to survey items not designed to introduce processing difficulties are also examined.) Findings are discussed as they relate to the usefulness of response latencies for understanding cultural variability in survey responding.

3. Issues in multilingual cross-cultural scales: Applying the AICS to Arabs and Jews in Hebrew and Arabic
Dr Boaz Shulruf (University of New South Wales)

Collectivism and Individualism are two constructs commonly used in cross-cultural research. However, when measuring these constructs across populations it is important to verify that the measure of cultural attributes is not biased by differing understandings of the questionnaire due to language or by different perceptions of the questions. The study compared responses to the Auckland Collectivism Individualism Scale, administered in Hebrew and Arabic, across Jews and Arabs in Israel. The findings suggest that the understanding of some questions may vary across populations/languages. Implications for multilingual cross-cultural research are discussed.

4. Comparing Survey Data Quality from Native and non-Native English Speakers
Ms Annie Pettit (Peanut Labs)

Researchers often apply standard techniques to identify which survey takers provide good data and which are not paying attention and simply giving random answers. Unfortunately, red herrings, click counts, verbatim ratings, and contradiction measurements all require high-level language skills. And, in North America, most data quality techniques assume that people taking surveys are fluent English speakers. This presentation will demonstrate the types of data quality questions that are better suited for differentiating between poor-quality survey responders and people for whom English is simply not their first language.

5. ‘Trust in physicians’ or ‘trust in physician’? Testing measurement invariance of trust in physicians in different (health care) cultures.
Ms Mira Hassan (Research assistant)

The ISSP 2011 allows us to study trust in physicians across countries. However, the items fielded ask about trust in, and behavioural attitudes towards, physicians in general. Considering different health care systems and different health care cultures, the underlying assumption that people really can say something about physicians in general does not apply to all countries. Thus there may be a semantic issue with the measurement used. Using a most-dissimilar-cases design, the measurement invariance of the trust-in-physicians measurements will be tested. The idea is to find out whether ‘trust in physicians’ is applicable across different (health care) cultures.