
ESRA 2023 Glance Program


All time references are in CEST

Interviewers and measurement quality 1

Session Organiser: Dr Vera Toepoel (Statistics Netherlands)
Time: Thursday 20 July, 14:00 - 15:30
Room: U6-01f

Many factors – personal, design, environmental – shape interviewer behavior and can impact measurement quality. These studies examine this complexity through various lenses. The first considers the shortage of qualified field staff for in-person survey data collection in the US resulting from technological advances and pandemic-driven changes in work-life balance. Comparisons are made between US and European experiences, and recommendations are provided for adapting recruitment practices and alleviating workforce demands. The second study considers the effect of mixed-mode designs on interviewer variance in data collection. The results suggest that interviewer variance may be larger in the face-to-face mode for sensitive items, but larger in the telephone mode for non-sensitive items. Third, the European Social Survey (ESS) monitors interviewer behavior in order to minimize undesirable interviewer behavior (UIB) and ensure data quality. A holistic approach is used to prevent, detect, and assess interviewer-related issues and minimize UIB. The fourth investigates the effect of interviewer age on respondents' reported values in the ESS across 30 countries. Results show that older interviewers primed conservative values and reduced openness to change. Fifth, gamification has been shown to improve Field Interviewer experience, motivation and loyalty in survey research. The paper discusses the effectiveness of multiple production-driven gamification programs in a nationwide survey and their impact on Field Interviewer behavior and on resulting respondent engagement. The final study investigates response styles (straight-lining, middle-, extreme-responding, and acquiescence) in face-to-face surveys using data from the Portrait Values Questionnaire in the ESS across 18 countries. Results showed significant effects of respondent-interviewer gender and age matching, and interview length was also related to the presence of response styles. The study recommends using interviewer-collected paradata and further studies on response styles in face-to-face surveys.

Keywords: interviewer behavior, measurement quality

Papers

Effective ways to detect and minimize undesirable interviewer behaviour in a decentralized cross-national comparative survey. Findings from the European Social Survey R9 and R10.

Dr Paulette Flore (SCP/ESS) - Presenting Author
Mr Roberto Briceno-Rosas (Gesis/ESS)
Dr Joost Kappelhof (SCP/ESS)

Interviewers can affect both the measurement and the representation dimension of the Total Survey Error framework (TSE; Groves et al., 2009). Undesirable interviewer behavior (UIB) can affect not only the accuracy of estimates but also their comparability in multinational, multiregional, or multicultural (3MC) surveys. Detecting and reducing UIB therefore becomes even more urgent for interviewer-assisted surveys in a 3MC context, especially for large-scale face-to-face surveys employing many interviewers at the same time.

The European Social Survey (ESS) aims to measure attitudes, beliefs and behaviour patterns in a changing world and has been conducted as a biennial cross-national face-to-face survey since 2001. To discourage UIB and keep its adverse effects on data quality to a minimum, the ESS developed a framework that takes a holistic approach to interviewer behaviour and interviewer involvement across the survey life cycle. This approach allows the ESS to prevent, detect and assess interviewer-related issues that affect ESS data quality, both in real time and after data collection.

We will discuss the ESS approach to minimizing UIB and monitoring interviewer behaviour in a timely and comparable way. We will present results of this approach based on the post hoc assessment of the R9 data release as well as the interim dataset analysis of R10.


What can interviewer-collected paradata tell about measurement quality in face-to-face surveys? Analyzing response styles in the 21-item version of Schwartz’s Portrait Values Questionnaire based on the European Social Survey, 2008-2018

Dr Marek Muszyński (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
Dr Piotr Jabkowski (Faculty of Sociology, Adam Mickiewicz University, Poznan)


Response styles, defined as the tendency to choose a response option on a basis other than the content of a question, are a form of response bias that can seriously threaten survey quality. Although response styles are present in both face-to-face and self-completion surveys, they have been studied far more in the latter mode of data collection. Our study aimed to fill this gap and investigate four response styles (straightlining, middle-, extreme-responding, and acquiescence) in face-to-face surveys. Specifically, we focus on the presence of response styles in the 21-item version of Schwartz’s Portrait Values Questionnaire, using data from six waves of the European Social Survey (2008-2018) covering 18 countries that participated in all consecutive rounds of the project. We made use of three complementary types of ESS datasets by combining: (1) standard cumulative survey results with (2) data from “interviewers’ questionnaires,” which provide interviewers’ observations on the context of the interview, and (3) data from “contact forms,” which record the timing, mode, and outcome of each contact attempt. We identified the main covariates of response styles, concentrating on interviewer-collected survey paradata describing respondent characteristics, interview contexts, survey length, and the interviewer-respondent sociodemographic match.
The results of multi-level regressions (with respondents nested within interviewers, countries, and ESS rounds) pointed to a non-negligible role of respondent-interviewer gender and age matching in the presence of response styles. In contrast, much weaker effects were found for respondents’ levels of cooperation before and during the interview. At the same time, interview length was significantly related to the presence of response styles, with faster interviews associated with lower data quality. We conclude with a recommendation to use interviewer-collected paradata and to conduct further studies on response styles in face-to-face surveys.
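For readers unfamiliar with how such indicators are built, the minimal sketch below (not taken from the paper) shows one common operationalisation of the four response-style measures on the 21 PVQ items. The column names pvq01-pvq21 and the 1-6 answer scale coding (1 = "very much like me" to 6 = "not like me at all") are assumptions for illustration only; the authors' exact definitions may differ.

# Illustrative sketch only: simple response-style indicators for 21 PVQ items.
import numpy as np
import pandas as pd

PVQ_ITEMS = [f"pvq{i:02d}" for i in range(1, 22)]   # hypothetical column names

def response_style_indicators(df: pd.DataFrame) -> pd.DataFrame:
    items = df[PVQ_ITEMS]
    out = pd.DataFrame(index=df.index)
    # Straightlining: every item answered with the same category.
    out["straightlining"] = (items.nunique(axis=1) == 1).astype(int)
    # Middle responding: share of answers in the two central categories (3 or 4).
    out["mid_rs"] = items.isin([3, 4]).mean(axis=1)
    # Extreme responding: share of answers at either scale endpoint (1 or 6).
    out["ext_rs"] = items.isin([1, 6]).mean(axis=1)
    # Acquiescence: share of answers on the agreeing, "like me" side (1 or 2).
    out["acq_rs"] = items.isin([1, 2]).mean(axis=1)
    return out

# Small synthetic example, purely to show the expected input/output shape.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 7, size=(5, 21)), columns=PVQ_ITEMS)
print(response_style_indicators(demo))

Indicators of this kind would then enter the multi-level regressions described above as outcomes, with interviewer, country, and round as higher-level units.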


Adapting Field Staff Recruiting Efforts in a Post-Pandemic World

Mr Rick Dulaney (Westat) - Presenting Author
Dr Jennifer Kelley (Westat)
Dr Jill Carle (Westat)
Ms Tammy Cook (Westat)
Mr Brad Edwards (Westat)

Quality interviewers are critical to successful in-person survey data collection. However, labor market shifts, exacerbated by the pandemic, have made field staff recruitment and retention increasingly challenging. In the U.S. context, technological advances and a pandemic-invigorated emphasis on work-life balance have catalyzed a seismic shift in where and how work is done, making it more difficult to find applicants with the right balance of interpersonal and technical skills. These broader labor market changes impact recruitment and retention, further straining data collection and project budgets, and potentially decreasing data quality. Some suggest the field survey labor force shortage is an existential threat to the in-person data collection mode, or even to all interviewer-mediated surveys.
This presentation will compare the recent U.S. field labor experience with conditions in Europe, drawing from interviews with leading European survey organizations to understand the scope of current challenges in recruitment and retention of qualified data collectors. Our assessment of U.S. conditions is informed by reports from a series of panel sessions with representation from most of the largest survey data collection organizations in the U.S., and a deep dive into Westat's experience on 8 major projects over the past decade. We will highlight recommendations for adapting recruitment practices in a post-pandemic labor market across survey contexts. We will also review ways to alleviate CAPI workforce demands (e.g., multimode alternatives; updating value propositions for respondents).


Are interviewer variances equal across modes in mixed-mode studies?

Ms Wenshan Yu (University of Michigan) - Presenting Author
Professor Michael Elliott (University of Michigan)
Professor Trivellore Raghunathan (University of Michigan)

As mixed-mode designs become increasingly popular, their effects on data quality have attracted much scholarly attention. Most studies have focused on the bias properties of mixed-mode designs; however, few have investigated whether mixed-mode designs have heterogeneous variance structures across modes. While many factors can contribute to the interviewer variance component, this study investigates whether interviewer variances are equal across modes in mixed-mode studies. We use data collected under two designs to answer this question. In the first design, where interviewers are responsible for either the face-to-face or the telephone mode, we examine whether there are mode differences in interviewer variance for 1) sensitive political questions, 2) international attitudes, and 3) item missingness indicators, using the Arab Barometer wave 6 Jordan data with a randomized mixed-mode design. In the second design, we draw on Health and Retirement Study (HRS) 2016 core survey data to examine the question for three topics when interviewers are responsible for both modes: 1) the CESD depression scale, 2) interviewer observations, and 3) the physical activity scale. To account for the lack of interpenetrated designs in both data sources, we include propensities of responding in the face-to-face (FTF) mode, conditional on various demographic variables, in our models. Given the limited statistical power of this study, we find significant differences in interviewer variances on one of twelve items in the Arab Barometer study and on three of seventeen items in the HRS. Overall, we find interviewer variances to be larger in FTF than in telephone (TEL) mode for sensitive items; for interviewer observations and non-sensitive items, the pattern is reversed. The analytical strategy applied in this study may serve as a tool for future interviewer monitoring in mixed-mode studies.
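As a rough illustration of the kind of model involved (not the authors' code), the sketch below fits a random-intercept model separately by mode and compares the estimated between-interviewer variance components, with a response-propensity covariate standing in for the paper's adjustment for the lack of an interpenetrated design. The data are simulated and all variable names (y, mode, iwer_id, ftf_propensity) are hypothetical.

# Illustrative sketch only: interviewer variance by mode via random-intercept models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
dat = pd.DataFrame({
    "mode": rng.choice(["ftf", "tel"], n),          # assigned interview mode
    "iwer_id": rng.integers(0, 30, n),              # interviewer identifier
    "ftf_propensity": rng.uniform(0.2, 0.8, n),     # estimated FTF response propensity
})
dat["y"] = rng.normal(size=n) + 0.3 * dat["ftf_propensity"]   # simulated item score

for mode, sub in dat.groupby("mode"):
    # Random intercept for interviewers; propensity adjusts for non-random assignment.
    fit = smf.mixedlm("y ~ ftf_propensity", data=sub, groups=sub["iwer_id"]).fit()
    iwer_var = float(fit.cov_re.iloc[0, 0])         # between-interviewer variance
    resid_var = fit.scale                           # within-interviewer (residual) variance
    icc = iwer_var / (iwer_var + resid_var)         # intraclass correlation
    print(mode, round(iwer_var, 3), round(icc, 3))

Whether the two mode-specific variance components differ significantly would still need a formal test (for example a likelihood-ratio or resampling-based comparison), which this simplified sketch does not include.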


The Gamification of Field Interviewer Production

Mrs Tammy Cook (Westat)
Ms Victoria Vignare (Westat) - Presenting Author

Gamification has been shown to improve user experience, boost morale, and increase participant loyalty (Richter, Raban, and Rafaeli, 2015). It has increasingly been applied to a variety of fields such as marketing, education and survey research (Puleston, 2011), and its use accelerated further when the pandemic forced a migration to remote work environments. In 2022, the United States, like much of the world, faced the continuing effects of the COVID-19 pandemic, widespread labor shortages, and changing attitudes toward work-life balance. These challenges affect Field Interviewer recruiting, training, staffing, production, and attrition. The employment environment in our industry has generated the need for novel innovations in Field Operations Management to engage and motivate employees. How can gamification be effectively applied to Field Interviewer careers, especially to tasks that are not enjoyable, like cold calling? Can it increase data collector reliability? What effect does it have on production? Which elements of visualization are included in a successful program? These questions and others were tested in a large, complex, nationwide, longitudinal survey with hundreds of Field Interviewers. This paper discusses the effectiveness of multiple production-driven gamification programs and their effect on Field Interviewer behavior. Our results reveal several fascinating effects of gamification, and we conclude with a look at opportunities to expand these strategies to promote study respondent engagement.