Assessing and addressing measurement equivalence in cross-cultural surveys 3
|Convenor||Dr Gijs Van Houten (Eurofound)|
|Coordinator 1||Dr Milos Kankaras (Eurofound)|
The first step in assessing the invariance of a test is its structural (configural) invariance across groups (countries, schools, etc.). We present two methods aimed at describing the internal structure of a scale and explaining group-specific deviations with other variables: multigroup principal component analysis (mgPCA) and multigroup partial least squares (mgPLS). Both methods provide intuitive graphics and similarity indices of each group to the common structure. The R package multigroup has been developed by Eslami, Bougeard and colleagues. An application is shown for the Cannabis Abuse Screening Test (CAST) administered in schools in 13 countries in 2011.
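The core idea, comparing each group's component structure to a common one via a similarity index, can be illustrated with a minimal numpy sketch. This is not the multigroup package itself: the simulated 6-item data, the group labels, and the choice of an RV-type coefficient between loading subspaces are all illustrative assumptions.

```python
import numpy as np

def top_loadings(X, k=2):
    """Top-k PCA loading matrix (n_vars x k) of column-centered X via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def rv_similarity(Va, Vb):
    """RV coefficient between the subspaces spanned by two loading matrices.
    1.0 means identical subspaces; values near 0 mean unrelated structures."""
    Pa, Pb = Va @ Va.T, Vb @ Vb.T  # projection matrices (rotation-invariant)
    return np.trace(Pa @ Pb) / np.sqrt(np.trace(Pa @ Pa) * np.trace(Pb @ Pb))

rng = np.random.default_rng(42)
# hypothetical 6-item scale sharing a 2-factor structure across 3 groups
L_true = np.array([[.8, 0], [.7, 0], [.6, 0],
                   [0, .8], [0, .7], [0, .6]])
groups = {}
for g in ["A", "B", "C"]:
    F = rng.normal(size=(200, 2))                       # latent scores
    groups[g] = F @ L_true.T + 0.3 * rng.normal(size=(200, 6))

# common structure from the pooled data, then per-group similarity to it
V_common = top_loadings(np.vstack(list(groups.values())))
sims = {g: rv_similarity(V_common, top_loadings(X)) for g, X in groups.items()}
for g, s in sims.items():
    print(f"group {g}: similarity to common structure = {s:.3f}")
```

A group whose index drops well below the others would be the candidate for configural non-invariance, to be explained by group-level covariates as the abstract describes.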
There is little empirical information regarding cultural variability in response latency patterns and the degree to which these provide comparable information or have comparable meaning across cultures. Using data from a survey of Chicago adults, we examine associations between respondent cultural background, measured both via race/ethnicity and language preference, and response latencies to survey questions intentionally designed to introduce information processing difficulties. (Similar analyses comparing responses to survey items not designed to introduce processing difficulties are also examined.) Findings are discussed as they relate to the usefulness of response latencies for understanding cultural variability in survey response.
Collectivism and individualism are two constructs commonly used in cross-cultural research. However, when measuring these constructs across populations it is important to verify that the measurement of cultural attributes is not biased by differences in understanding of the questionnaire due to language, or by different perceptions of the questions. The study compared responses to the Auckland Individualism and Collectivism Scale among Jews and Arabs in Israel, administered in Hebrew and Arabic respectively. The findings suggest that the understanding of some questions may vary across populations/languages. Implications for multilingual cross-cultural research are discussed.
Researchers often apply standard techniques to identify which survey takers provide good data and which are not paying attention and simply giving random answers. Unfortunately, red herrings, click counts, verbatim ratings, and contradiction measurements all require high-level language skills. And, in North America, most data quality techniques assume that people taking surveys are fluent English speakers. This presentation will demonstrate the types of data quality questions that are better suited for differentiating between poor-quality survey responders and people for whom English is simply not their first language.
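The contrast between language-dependent and language-independent checks can be sketched in a few lines. This is a toy illustration, not the presenter's method: the two flags shown (straightlining and speeding), the field names, and the 60-second threshold are illustrative assumptions about checks that need no reading-comprehension judgement, unlike red herrings or verbatim ratings.

```python
from dataclasses import dataclass

@dataclass
class Response:
    answers: list[int]        # Likert-type answers, e.g. 1..5
    seconds_elapsed: float    # total completion time

def quality_flags(r: Response, min_seconds: float = 60.0) -> dict[str, bool]:
    """Two language-independent quality checks; neither requires the
    respondent (or the analyst) to parse free-text English."""
    return {
        # straightlining: every item received the identical answer
        "straightliner": len(set(r.answers)) == 1,
        # speeding: finished implausibly fast for the questionnaire length
        "speeder": r.seconds_elapsed < min_seconds,
    }

careful = Response(answers=[4, 2, 5, 3, 4, 2], seconds_elapsed=310.0)
rushed = Response(answers=[3, 3, 3, 3, 3, 3], seconds_elapsed=35.0)
print(quality_flags(careful))
print(quality_flags(rushed))
```

A non-native English speaker answering slowly but attentively would pass both checks, whereas a red-herring item might flag them spuriously, which is the distinction the presentation draws.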
The ISSP 2011 allows us to study trust in physicians across countries. However, the items fielded ask about trust and behavioural attitudes towards physicians in general. Given different health care systems and different health care cultures, the underlying assumption that people can really say something about physicians in general does not apply to all countries. Thus there may be a semantic issue with the measurement used. Using a most-dissimilar-case design, the measurement invariance of the trust-in-physicians measures will be tested. The idea is to find out whether ‘trust in physicians’ is applicable across different (health care) cultures.