Wednesday 19th July, 11:00 - 12:30 Room: N AUD5


The role of interviewers in personal interviews and test situations

Chair: Ms Manja Attig (Leibniz Institute for Educational Trajectories)
Coordinator 1: Mr Felix Benjamin Grobe (Leibniz Institute for Educational Trajectories)
Coordinator 2: Ms Claudia Karwath (Leibniz Institute for Educational Trajectories)

Session Details

This session deals with different kinds of interviewer effects in personal interviews and test situations.

Interviewers can affect the whole process of data collection and data quality in a positive or negative way: their sociodemographic background (e.g. sex, age, education), their behavior (e.g. the interaction on the doorstep), their attitude (e.g. while asking questions) and their experience as interviewers (e.g. in handling refusals or sensitive content) can all have an impact. Moreover, these factors are linked to and influence each other.
In addition, interviewers can be subject to other structural influences (e.g. payment, burden, continuity), which can likewise affect the process of data collection and data quality. For these reasons, it is important to investigate interviewer effects more closely.

Researchers are invited to submit abstracts focusing on the above-mentioned research questions. Further, work concentrating on analyses of longitudinal data, e.g. the German National Educational Panel Study, the Early Childhood Longitudinal Study or the 1970 British Cohort Study, is welcome.
We are interested in theoretical as well as empirical studies considering the influence of interviewers on data outcomes.

Paper Details

1. Adapting Clinical Protocols for Survey Research Administration: Implications for Interviewer Training and Data Quality
Ms Eva Leissou (University of Michigan, Survey Research Center, Survey Research Operations)
Ms Lindsay Ryan (University of Michigan, Survey Research Center, Health and Retirement Study)
Ms Donnalee Grey-Farquharson (University of Michigan, Survey Research Center, Survey Research Operations)

The Health and Retirement Study (HRS) has been collecting data on the economic and physical well-being of men and women over 50 years of age in America since 1992. The 2016 Healthy Cognitive Aging Project (HCAP) is an additional assessment of the HRS sample aged 65 and up, using a battery of cognitive assessments administered by survey-trained interviewers via computer-assisted interviewing protocols. The HCAP design is adapted from the HRS Aging, Demographics and Memory Study (ADAMS), a lengthy assessment of cognitive impairment and dementia administered by clinical technicians, with clinical diagnoses decided by a clinical consensus panel. The HCAP protocol mimics clinical administrations of cognitive tests in that survey interviewers score most tests on the spot; however, no clinical diagnoses are made. Rather, the HCAP assessment produces a research diagnosis meant to provide population data on impairment rather than individual diagnoses. To successfully administer the HCAP protocol, survey interviewers must be able to handle complex test administration protocols, keep non-traditional survey materials organized, and in some cases follow along with the cognitive tests to note incorrect responses. For example, they need to concurrently keep track of responses and time the length of the respondent’s answers while also doing data entry. One of the project’s main priorities was developing a cognitive test protocol that could be successfully administered by non-clinical interviewers.
The focus of this paper is to describe the process we underwent to develop and refine the study protocol (cognitive test administration) and the interviewer training methodologies. More specifically, we will discuss how we trained interviewers to adhere to standard survey methods based on clinical assessments while also using personal judgement whenever the protocol required it. We will discuss quality assurance techniques used post-training to evaluate and further refine the methods and to calibrate interviewers’ skills. Finally, we will discuss how we measured the effectiveness of our protocol by comparing data from clinician assessments with those produced by the survey interviewers (MADC and Seattle pretests).


2. Let's ask interviewers! (Case: ESS Slovenia)
Mrs Ziva Broder (University of Ljubljana, Faculty of Social Sciences)
Mrs Rebeka Falle Zorman (University of Ljubljana, Faculty of Social Sciences)

Researchers who have been dealing with the implementation of the field phase of surveys over a longer period of time are increasingly aware of how important the role of the interviewer is. Notwithstanding the importance of the research, strict methodological rules, possible incentives, carefully drawn-up standardized advance letters, extensive interviewer training, continuous assistance by phone or email, etc., in the end the interviewer is the one who will eventually go into the field, make personal contact with respondents and convince them (or not) to participate in the survey.

Slovenia has been a member of the European Social Survey (ESS) team since the first wave in 2002 (the authors have been working as field directors from the beginning). Since then we have almost completely changed the interviewer management, mainly switching from students to professional interviewers. The difference is obvious, reflected mostly in higher response rates and in the quality of the obtained data. The “ideal interviewer” in Slovenia is a woman (although we do have some very good male colleagues), middle-aged, with family and other work obligations. In recent years, students have unfortunately proved to be unreliable, seeing surveys as “easy money”, while professionals are well aware that their livelihood depends on the quality of their work.

In the last two ESS waves, we paid particular attention to the interviewers and sent them an online questionnaire through which we were able to obtain their opinions and, consequently, a good insight into the situation in the field. Furthermore, through the interviewers’ opinions we were able to figure out what actually works and what does not work in real-life situations. Such firsthand information is of outstanding value for us and helps us to prepare interviewer training for future waves and to provide interviewers with some “hints and tips” on how to convince respondents to participate.

In the last part of the questionnaire, interviewers assess the importance of specific approaches to convincing respondents to participate. Rather surprisingly, in both of our 2016 surveys (Slovenian Public Opinion and ESS) incentives received the lowest ratings, although they were described as a very welcome change by both interviewers and participants. Interviewers indicate kindness, courtesy, personal contact with respondents, positivity and flexibility as the most important approaches. They also believe that the key elements in convincing the respondent to participate are unobtrusiveness, sincere words of appreciation, persistence and a good understanding of the purpose of the research.

In this paper we will try to determine the interviewers’ basic characteristics (gender, age, education) that affect the response rate. We will check whether experience is crucial for the quality of work and a high response rate. We will compare interviewers who, at a given moment, participate in one or more projects. At the same time, we will particularly highlight the persuasion tactics that are marked as very important and compare their importance according to the general characteristics of the interviewers.


3. Do interviewers affect test situations in households? Results from the newborn cohort study of the National Educational Panel Study (NEPS).
Miss Claudia Karwath (Leibniz Institute for Educational Trajectories)
Dr Manja Attig (Leibniz Institute for Educational Trajectories)

Although interviewer effects are well researched and discussed for different stages of the survey process (e.g., motivating and contacting participants, influencing unit and item non-response), little is known about interviewer effects in test situations. However, interviewers are also involved in test situations, which can be a bigger challenge than interviewing participants and can therefore also influence the collected data. Because there are no known findings in this field, this presentation deals with possible interviewer effects during the test situation of the sensorimotor tasks in the first wave of the newborn cohort study of the National Educational Panel Study (NEPS).

In the newborn cohort study, interviewers are responsible for contacting and motivating the participants, collecting data during a computer-assisted personal interview (CAPI) with the parents, and carrying out the videotaped infant assessments (e.g., sensorimotor tasks). However, because of the young age of the children (around 7 months) and the “household situation” (e.g., siblings, pets), the interviewers face a special challenge: they have to carry out the whole interview in a standardized way and in accordance with the implementation rules while also dealing with the “household situation”. Moreover, for infant assessments, which are typically administered in a lab setting, the “household situation” can complicate the administration and consequently can affect the interviewers as well as the collected data. Due to these special challenges for interviewers, the following analyses concentrate on the influence of interviewers on the test situation and on the behavior of the infants in the sensorimotor tasks. The analyses use the number of mistakes made by the interviewers during the administration of the sensorimotor tasks as well as the status of the infants’ sensorimotor development (coded from the sensorimotor tasks) as dependent variables. As independent variables, the age of the interviewers (mostly 50 to 65 years), their education (mostly upper secondary education), their experience as interviewers at the same survey institute (mostly up to two years of experience) and the number of realized interviews (on average 33 interviews) are used. Additionally, the tiredness of the infants is considered.
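To make the analysis setup concrete, the sketch below shows one way such a model could be specified: a regression of interviewer mistakes on interviewer characteristics and infant tiredness. It is an illustration only; the variable names are hypothetical, the data are synthetic rather than NEPS data, and the abstract does not state which estimator the authors actually used.

```python
# Illustrative sketch only (not the authors' code): a regression of interviewer
# mistakes on interviewer characteristics and infant tiredness, with hypothetical
# variable names and synthetic data standing in for the NEPS newborn cohort data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "mistakes": rng.poisson(2, n),              # interviewer mistakes per assessment
    "interviewer_age": rng.integers(50, 66, n), # mostly 50 to 65 years
    "upper_secondary": rng.integers(0, 2, n),   # 1 = upper secondary education
    "experience_years": rng.integers(0, 3, n),  # experience at the survey institute
    "n_interviews": rng.poisson(33, n),         # realized interviews (about 33 on average)
    "infant_tired": rng.integers(0, 2, n),      # 1 = infant was tired
})

# Linear model: interviewer characteristics and infant tiredness as predictors
model = smf.ols(
    "mistakes ~ interviewer_age + upper_secondary + experience_years"
    " + n_interviews + infant_tired",
    data=df,
).fit()
print(model.summary())
```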

Concentrating on the interviewer characteristics, the results show that older interviewers make more mistakes when they administer the sensorimotor tasks, whereas interviewers with a higher number of realized interviews show fewer problems. Further, interviewers who had to deal with a tired infant tended to make more mistakes. For the status of sensorimotor development as the dependent variable, no significant effect of interviewer age is found. Again, a positive effect of the number of realized interviews is found. Furthermore, tired infants perform less well in the sensorimotor tasks.

The analyses show an effect of interviewers on the sensorimotor tasks, both on the performance of the interviewers and on that of the infants. Although most of these effects are quite small and explain only a part of the total variance, possible interviewer effects should be taken into account and counteracted (e.g., with interviewer training) whenever interviewers are used.