
ESRA 2023 Program

All time references are in CEST

Analyzing interaction between interviewers and sample members to better understand survey participation and data quality

Session Organisers: Mrs P. Linh Nguyen (University of Essex / University of Mannheim)
Mrs Yfke Ongena (University of Groningen)
Mr Frederick Conrad (University of Michigan)
Time: Tuesday 18 July, 16:00 - 17:00
Room: U6-07

Since Charles Cannell and his colleagues introduced behavior coding to the methodology tool chest in the 1960s, survey researchers have examined the details of interaction to understand sample members’ decisions to participate in surveys, how respondents develop rapport with interviewers, how respondents negotiate question understanding with interviewers, how they exhibit uncertainty about question meaning and how interviewers react, and so on.

This ESRA session continues that line of research. We welcome papers that investigate interaction in survey invitations and data collection in traditional interview modes (face-to-face and telephone), as well as interaction with the user interface in self-administered, automated modes (web surveys, SMS/text messaging).

Studies of interaction for pretesting questionnaires, improving response rates, and identifying the origins of and reducing measurement error are all within scope for this session. We are particularly interested in studies that connect interaction to the Total Survey Error framework.

Keywords: interaction coding, data quality, survey participation

Papers

Linking interviewer effects to problematic interactional behaviors between interviewer and respondent in a multilingual context

Mrs P. Linh Nguyen (University of Essex, University of Mannheim) - Presenting Author
Mr Frederick Conrad (University of Michigan)

In low- and middle-income countries (LMICs), interviewer-administered, face-to-face (F2F) surveys are the main data collection tool. Varying levels of literacy due to the lack of universal education, particularly among rural populations, limit the ability to collect data without the help of an interviewer. In this context, where both survey researchers and respondents are highly dependent on interviewers, the interviewer’s role, and the interviewer effects that shape data quality, are especially critical.
Most LMICs are multilingual, so both interviewers and respondents typically speak multiple local languages to varying degrees. As multilingual respondents differ in their proficiency in the survey language, some will exhibit more cognitive processing problems, evidenced by audible manifestations of problematic interactional behaviors during the interview (e.g., requests for clarification or for repetition of the question). Such difficulties will be especially pronounced among respondents with lower education. Faced with comprehension challenges, interviewers might adapt their behavior to ensure that respondents understand the question as intended, thus deviating from standardized interviewing in the predetermined interview language. Such problematic interviewer behaviors include questioning, giving clarification, and/or probing in a non-scripted language, and/or translating non-conforming answers into the survey language.
Using interactional analysis of the recordings of ten selected questions from a survey on financial behavior and attitudes in Zambia, based on a sample of ca. 850 interviews in two local languages (Bemba and Chewa), we analyze the relationship between six indicators of problematic interactional behaviors and interviewer effects.
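
For illustration: interviewer effects of this kind are commonly quantified as the interviewer-level intraclass correlation (ICC) from a multilevel model. The sketch below assumes hypothetical variable names and data and is not the authors’ actual analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per interview. 'y' is a response or data
# quality indicator; 'clarification_requests' is one coded indicator of
# problematic interaction; 'interviewer_id' identifies the interviewer.
# These column names are illustrative assumptions.
df = pd.read_csv("interviews.csv")

# Random intercept per interviewer; one behavior indicator as fixed effect.
result = smf.mixedlm(
    "y ~ clarification_requests",
    data=df,
    groups=df["interviewer_id"],
).fit()

# Interviewer-level ICC: between-interviewer variance over total variance.
between = result.cov_re.iloc[0, 0]
within = result.scale
print(result.summary())
print(f"Interviewer ICC: {between / (between + within):.3f}")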


Using Indicators of Respondent Behavior to Predict Interviewer Quality Concerns

Ms Elizabeth Ohryn (University of Michigan, Institute for Social Research, Survey Research Center) - Presenting Author
Ms Sarah Crane (University of Michigan, Institute for Social Research, Survey Research Center)

There is established evidence in the literature that respondent behavior during a computer-assisted telephone interview (CATI) may influence data quality. The SRO Quality Control team analyzed post-interview interviewer observations from a nationally representative panel study in conjunction with data quality scores from recorded completed interviews, in an effort to gauge whether indicators of respondent behavior could be used to predict interviewer quality concerns. We used the outcome of that analysis to build a framework for additional behavior coding, which members of the quality control team apply after reviewing recordings of completed interviews during the evaluation process. This presentation will report the results of both efforts, including a comparison between behavior coding done subjectively (by the interviewer) and objectively (by the evaluator) on the same cases.
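
A hedged sketch of the prediction step described above, assuming illustrative indicator names rather than the SRO team’s actual variables or model: a logistic regression predicting a binary quality-concern flag from coded respondent-behavior indicators.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical data: one row per completed interview, with behavior
# indicator counts and a 0/1 quality-concern flag from the evaluation.
df = pd.read_csv("observations.csv")
features = ["hesitations", "clarification_requests", "off_topic_remarks"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["quality_concern"], test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))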


Spontaneous interviewer actions in interviews on trust in different institutions and neighbors

Dr Yfke Ongena (University of Groningen) - Presenting Author
Mrs Emily van der Linde (University of Groningen)

Battery questions can be somewhat boring and therefore burdensome in interviews. In an attempt to reduce the burden on the respondents’ side in standardized interviews, interviewers commonly deviate from their script. Exploratory qualitative analysis of interviewer–respondent interactions from a Zambian survey showed a particular instance in which interviewers attempted to clarify the respondents’ response task: for an attitude item on a scale from 0 to 10, interviewers added the explanation that ‘5’ is the middle category. Pointing out a middle option puts more emphasis on this response option, resulting in larger endorsement of ‘5’. In this study we aim to explore sequential patterns in interviewer actions. Looking at three different interviewer deviations, i.e., invalid question reading (major changes in question reading), mismatch question reading (major changes in reading of the response alternatives), and suggestive question reading, we find that mismatch and invalid question reading are more often spontaneous (i.e., not preceded by a respondent utterance) than suggestive question reading. For mismatch and invalid question readings, no specific respondent utterance (request, answer, comment, report, or perception) preceded such a misreading more often than expected. Suggestive question readings were more often preceded by reports, i.e., indirect answers from which a direct answer can be derived.
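
A minimal sketch of this kind of sequential analysis, under assumed utterance codes rather than the study’s actual coding scheme: tally which respondent utterance type (if any) immediately precedes each interviewer deviation, treating deviations with no preceding respondent turn as spontaneous.

from collections import Counter

# Hypothetical coded transcript: (speaker, code) pairs in interaction
# order, where "R" is the respondent and "I" is the interviewer.
events = [
    ("R", "report"), ("I", "suggestive"),
    ("R", "answer"), ("I", "mismatch"),
    ("I", "invalid"),              # spontaneous: no respondent turn before it
    ("R", "request"), ("I", "suggestive"),
]

deviations = {"invalid", "mismatch", "suggestive"}
transitions = Counter()
for prev, curr in zip(events, events[1:]):
    if curr[0] == "I" and curr[1] in deviations:
        # Preceding respondent code, or "spontaneous" if none.
        antecedent = prev[1] if prev[0] == "R" else "spontaneous"
        transitions[(antecedent, curr[1])] += 1

print(transitions)
# With real data, the observed antecedent-by-deviation counts can be
# tested against independence, e.g. with scipy.stats.chi2_contingency.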