Conference Programme 2015
Thursday 16th July, 09:00 - 10:30 Room: O-206
Analysis of Cognitive Interview Data 1
Convenor: Dr Kristen Miller (National Center for Health Statistics)
Coordinator 1: Kristen Miller
Coordinator 2: Gordon Willis (National Cancer Institute)
Session Details

While researchers have analyzed cognitive interviews in a variety of ways, there has been little discussion of the process of analysis in the cognitive interview literature. That is, there has been little explanation of how cognitive interviews should be examined and studied to produce reputable findings. In the past year, however, this void has begun to be addressed by publications presenting various analytic techniques for producing cognitive interview findings.

Additionally, it has only recently been recognized that cognitive interviewing studies can serve multiple functions in understanding the performance of a survey question. As traditionally understood, cognitive interviewing studies can identify the various difficulties that respondents may experience when attempting to answer a survey question. Cognitive interviewing studies may also examine construct validity, in that they can identify the content or experiences that respondents consider and ultimately include in their answer. Finally, cognitive interviewing studies can examine issues of comparability, for example, the accuracy of translations or equivalence across socio-cultural groups. The type of analytic process employed within a cognitive interviewing study determines the types of conclusions that can be drawn.

This session will focus specifically on issues related to the analysis of cognitive interviews. Topics include analytic techniques, specific methods for addressing study goals (e.g. accuracy of translations and cross-cultural comparability), practices to support study transparency and credibility, and ways of assessing and addressing varying levels of data quality.
Paper Details

1. Overview: Analysis of the Cognitive Interview in Questionnaire Design
Dr Gordon Willis (National Cancer Institute, NIH)
In this presentation, I will cover the major themes of the book “Analysis of the Cognitive Interview in Questionnaire Design” (Willis, 2015), especially the ways in which analysis strategy depends on (a) the investigator’s underlying objective; (b) approaches to data coding; and (c) compilation and data reduction. I will also discuss issues of report writing, based on the comprehensive Cognitive Interviewing Reporting Framework introduced by Boeije and Willis (2013), and report dissemination via the Q-Bank database of cognitive testing reports. The presentation will serve as an introduction to the session papers that follow.
2. I don’t believe it! How credible are your cognitive interview findings?
Ms Jo D'ardenne (NatCen Social Research)
Ms Debbie Collins (NatCen Social Research)
Cognitive interview data can be seen as qualitative in nature, being verbal accounts of individual participants’ thought processes, their understandings of the survey response task presented, and the factors that shape their responses. A systematic, transparent approach to analysis is, in our view, vital when it comes to interpreting the data and answering the study’s research question(s). But what does such an approach look like in practice, and does using such an approach make your findings credible? In this paper we address these questions and present examples from our own work, illustrating the interaction between study design, analysis
3. A comparison of pretest recommendations based on cognitive interviews
Mrs Wanda Otto (GESIS)
Cognitive interviewing is an active pretesting method used to identify problems in the answering process and to develop item revisions intended to produce more accurate questions. Unfortunately, little is known about the benefits actually realized from these revisions.
Therefore, recommendations based on three different item sets (“role of women”, “internationalization”, “role of fathers”) are examined with a quota sample (n ≈ 15-20). The results of the original and the revised items are then compared to determine which version performs better. The initial criterion for evaluation is the extent of problems, which are analyzed with an error coding scheme taken from DeMaio and