ESRA 2019 Draft Programme at a Glance
Ensuring Validity and Measurement Equivalence through Questionnaire Design and Cognitive Pretesting Techniques 2
|Session Organisers| Dr Natalja Menold (GESIS)
Mr Peyton M Craighill (Office of Opinion Research | U.S. Department of State)
Ms Patricia Hadler (GESIS)
Ms Aneta G. Guenova (Office of Opinion Research | U.S. Department of State)
Dr Cornelia Neuert (GESIS)
Dr Patricia Goerman (U.S. Census Bureau)
|Time| Friday 19th July, 11:00 - 12:30
According to the total survey error framework, validity refers to the degree to which survey results can be interpreted with respect to the concepts under investigation, i.e. certain opinions, behaviors, abilities, and competencies. Validity also concerns interpretations of differences or changes in these concepts, such as comparisons between respondent groups, across time, or across cultures. Survey researchers conduct studies in various languages and cultures within one country or across several countries and gather demographic, administrative and social data in these multi-cultural contexts, constantly trying to improve the accuracy of these measurements. The comparative aspects of validity have been referred to as measurement equivalence issues. Many researchers address measurement equivalence during data analysis, after data collection, often finding that measurement invariance is lacking. However, the sources of measurement non-invariance are more likely to be associated with questionnaire design and data collection processes.
Difficult questions, overloaded instructions or visual design elements can affect validity and measurement equivalence. This session aims to discuss methods of developing measurement instruments and their effect on validity and measurement equivalence. The goal is to better understand the corresponding sources of measurement error and to present methods which help to increase validity in comparative research. In particular, generic, multi-method approaches are of interest. Such approaches can include expert reviews by subject matter experts, cognitive interviews and pilot interviews with respondents who represent the main demographic groups of the target countries. In addition, quantitative analyses of findings, e.g. from experiments comparing different versions of questionnaires, can help to evaluate the sources of decreased validity and deficient measurement equivalence.
Keywords: Questionnaire Design, Cognitive interviewing, Validity, Measurement Equivalence, Pretesting, Question Evaluation
Improving cross-cultural measurement invariance during the piloting of questionnaires
Dr Natalja Menold (GESIS) - Presenting Author
Mrs Patricia Hadler (GESIS)
Dr Cornelia Neuert (GESIS)
Mrs Verena Ortmanns (GESIS)
Measurement equivalence of data means that comparability is not biased by respondents' group membership. Researchers addressing measurement equivalence of survey data in cross-cultural research have often found that exact measurement invariance can rarely be achieved. A line of research has therefore been concerned with more liberal methods of statistically testing measurement equivalence. However, if the questionnaires are not comparable due to differences in question comprehension and in the responses provided by survey participants, incorrect conclusions with respect to comparability can be drawn when using ex-post methods of data analysis only. We therefore address the question of how the comparability of questionnaires can be ensured prior to data collection to avoid incorrect conclusions afterward. Our research is conducted in German and in American English. First, we test and revise the questions based on the findings of three pretesting methods: 1) cognitive interviews, 2) evaluation by questionnaire design experts and 3) web probing. For our research purpose, we use questions for which exact measurement equivalence (cross-cultural comparability) did not hold in the available data, as well as instruments whose measurement invariance properties were not previously known. After piloting the instruments and revising them on the basis of the findings of the three methods, we collect quantitative data for the original and the improved instruments. The different instrument versions are then compared with regard to measurement invariance across countries. The results are discussed with respect to the implications for future research and practice in cross-cultural research.
Pre-testing of the Questionnaire in the Context of a Panel Study: Analysis of Polish Panel Survey POLPAN Pre-test Results
Ms Danuta Zyczynska-Ciolek (Institute of Philosophy and Sociology, Polish Academy of Sciences)
Ms Weronika Boruc (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
The paper discusses the role of pre-testing in consecutive waves of a panel survey investigating how attitudes and opinions change over long periods of time. On the one hand, accounting for this change requires that the phrasing of questions remain unaltered in order to maintain comparability. On the other, pre-testing may reveal that respondents have difficulties understanding some items due to shifts in meaning that occurred over time. This can significantly affect measurement equivalence. The paper discusses the dilemma of retaining or changing questionnaire items, using the results of the pre-testing of the Polish Panel Survey POLPAN 1988–2018, conducted in March 2018. The questionnaire items selected for analysis deal with (1) the determinants of life success, (2) the intensity of social-group conflicts, and (3) the self-assessment of social position. We analyze the types of problems that emerged during pre-testing and present examples of concepts whose meaning has evolved because of differences in the social and political context, e.g. before and after 1989. We conclude that pre-test results might not only lead to modifications of questionnaire items and improvements in fieldwork instructions for interviewers, but should also be taken into account in the interpretation of the main-survey results.
The value of cognitive and expert interviews in the adaptation of questionnaires for migrant communities: insights from health research
Professor Patrick Brzoska (Witten/Herdecke University, Faculty of Health, School of Medicine) - Presenting Author
Introduction: Obtaining survey data on migrants involves various challenges. One of these concerns language. Given their often limited proficiency in the host country's language, surveys of migrants usually need to be conducted in their mother tongue. Available translations of questionnaires, however, may be difficult for migrants to understand, because their use of language may differ syntactically and lexically from the same language as spoken in their countries of origin. Consequently, questionnaires need to be re-adapted to achieve functional equivalence. Using the assessment of illness and medication beliefs among Turkish migrants in Germany as an example, this study illustrates how cognitive and expert interviews may serve this purpose.
Methods: The study examines the comprehensibility of the Turkish versions of the Illness Perception Questionnaire and the Beliefs about Medicines Questionnaire in a sample of Turkish migrants in Germany. Fifteen patients were surveyed through cognitive interviews using a think-aloud approach. Additionally, interviews were conducted with experts experienced in research with this population group. The interviews focused on the clarity of items, potentially ambiguous wordings, the appropriateness of the language style, as well as suggestions for improvements.
Results: The interviews showed that several items of both Turkish-language questionnaires were misunderstood by Turkish migrants because of complex and ambiguous item wording. Furthermore, confusion existed over the (apparent) similarity of certain items. Experts also identified items that they considered difficult for this population to understand because of their formal wording.
Discussion: Questionnaires developed for native populations (such as Turks in Turkey) may be difficult for migrants to understand. This may be attributable to a divergent development of language over time. These language differences need to be addressed through thorough testing and re-adaptation when research instruments are to be used across both populations. Qualitative interviews may support this process.
Doorstep Interactions: Motivators and Challenges to Increase Survey Participation in Seven Languages
Dr Yazmin Garcia Trejo (U.S. Census Bureau) - Presenting Author
Dr Patricia Goerman (U.S. Census Bureau)
Dr Jiyoung Son (Independent researcher)
Dr Alisu Schoua-Glusberg (Research Support Services Inc.)
Would you answer a census if an interviewer knocked at your door? The U.S. Census prioritizes self-response to census questionnaires. However, when a given household does not respond, a massive follow-up operation ensues. During this stage, interviewers visit the households that did not submit their census forms and seek to conduct the census in person. In-person interviews, however, are challenging. Interviewers need to build rapport, and often navigate language barriers and cultural norms, to persuade people to participate. In this paper we focus on analyzing face-to-face messaging that enumerators can use at the doorstep. In particular, we examine messages designed to encourage non-English-speaking populations in the U.S. context. In 2017 we collected data from 42 focus groups across seven languages (Arabic, Chinese, English, Korean, Russian, Spanish and Vietnamese). The participants provided feedback on the messages and on any concerns they had about Census participation. We created videos of four different doorstep interactions based on characteristics identified in 2010 among mindsets of people with low intent to participate in the Census. The videos covered four hypothetical situations between a census interviewer and a respondent: (1) unawareness of the census; (2) fear/mistrust of government; (3) low engagement; and (4) language barriers. The videos showed interviewers presenting a variety of messages to respondents, including the purpose of the interviewer's visit, how participation can benefit the respondent's community, confidentiality, and the mandatory nature of the Census. For this paper we analyze the focus group data to identify positive, neutral and negative messages that people discussed during the focus groups in relation to their census participation. We expect this research to provide evidence for inclusion in training materials for bilingual interviewers who speak the languages in question.
Moreover, this research can also provide a baseline of effective types of tailored messages for