
ESRA 2019 Programme at a Glance


Collecting Self-Report Data Using New(er) One-to-One Communication Modes

Session Organisers:
Professor Michael Schober (New School for Social Research)
Professor Frederick Conrad (University of Michigan)
Mr Andrew Hupp (University of Michigan)
Time: Wednesday 17th July, 16:30 - 17:30
Room: D09

People are now using more modes for daily one-to-one communication, and switching across them for different purposes, than ever before: augmenting their in-person face-to-face, phone, and email communication with texting, video chatting (Skype, FaceTime, Google Hangouts, Zoom), speaking with automated dialog systems, and exchanging multimedia content (images, recorded audio or video) through messaging systems (e.g., WhatsApp, Messages, Instagram). Researchers carrying out probability sample surveys have adapted to this rapidly evolving world by optimizing web surveys for mobile devices and improving touchtone IVR surveys, but a number of popular modes have not yet been widely adopted for sample survey recruitment or self-report data collection. This session assembles papers on the potential of these newer, popular communication modes for survey data collection (elicited self-report).

The potential use of such modes for collecting self-report data is likely to raise new challenges in adjusting for not-yet-understood kinds of mode effects (e.g., from synchronous vs. asynchronous modes). Some new modes may also enable special adaptations to promote data quality—for example, increasing privacy when needed by turning off the video feed when a video interview concerns sensitive topics.

Submissions are welcome that report:

• Lab or field experiments using one or more communication modes not yet widely used for research
• Feasibility or usability studies
• Theoretical or conceptual arguments about the benefits and drawbacks of using particular modes

Papers that present empirical evidence about measurement, coverage, and/or non-response error associated with one or more new modes are particularly welcome.

Keywords: survey mode, interview, recruitment, data quality, total survey error, popular communication modes

Efficiency of Interviews on Smartphones: Texting versus Voice

Professor Frederick Conrad (University of Michigan) - Presenting Author
Mr Andrew Hupp (University of Michigan)
Professor Christopher Antoun (University of Maryland)
Ms H. Yanna Yan (University of Michigan)
Professor Michael Schober (The New School)

Smartphones create new measurement possibilities for social research, e.g., multiple modes on a single device for conducting interviews and passive measurement via native sensors. These new opportunities require rethinking and revising many aspects of conventional practice. For example, because voicemail is native to smartphones, the concept of "contact" may need to be expanded to include "by voicemail," as opposed to only two-way, confirmed contact. In this paper, we consider how smartphones are changing the landscape for conducting and evaluating survey interviews. Specifically, we compare the efficiency of research conducted via text and voice on smartphones (with both automated and human-administered implementations of each mode), complementing a previous comparison of their data quality (Schober et al., 2015), in which text interviewing led to higher response rates, less satisficing, more disclosure (i.e., better quality data), and greater satisfaction with the interview. The current analyses suggest that texting (especially when automated) leads to substantially faster recruitment than voice, reducing the overall field period for text data collection, despite text interviews being substantially longer than voice interviews. We attribute the greater effectiveness of text recruiting to the greater persistence of text invitations compared with voice invitations (no voicemail messages were left) and to the relatively large number of notifications triggered by the arrival of a text message. Although text interviews last longer than voice interviews, human interviewers may be more efficient if each text interviewer can conduct multiple interviews at once, and if the "timeout interval" – the elapsed time without a response after which a case is deactivated – is shortened. Our analyses indicate that shortening the timeout interval will produce greater efficiency gains than enabling interviewers to conduct multiple text interviews simultaneously. We discuss the need for new efficiency metrics for asynchronous modes like text.
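[Editor's illustration] The timeout/concurrency trade-off lends itself to a back-of-the-envelope sketch of the kind of efficiency metric the abstract calls for (completed interviews per interviewer-hour). The Python sketch below is not the authors' model: the exponential reply-delay assumption, the per-exchange drop-out probability, and all parameter values are hypothetical.

import random

def interviews_per_hour(n_cases=500, n_exchanges=10, timeout_min=60.0,
                        concurrency=1, mean_delay_min=4.0,
                        p_drop=0.02, handle_min=0.5, seed=0):
    # Toy efficiency metric for asynchronous text interviewing.
    # All parameters are hypothetical. Reply delays are exponential; with
    # probability p_drop per exchange the respondent stops replying, and
    # the case is deactivated after waiting out the timeout. Handling time
    # (reading/sending messages) is serial across cases, while waiting
    # time is shared among `concurrency` simultaneous cases.
    rng = random.Random(seed)
    completed = 0
    handling = waiting = 0.0  # interviewer-minutes
    for _ in range(n_cases):
        for _ in range(n_exchanges):
            handling += handle_min
            if rng.random() < p_drop:        # respondent goes silent
                waiting += timeout_min       # dead time until deactivation
                break
            delay = rng.expovariate(1.0 / mean_delay_min)
            waiting += min(delay, timeout_min)
            if delay > timeout_min:          # reply too slow: deactivated
                break
        else:
            completed += 1                   # all exchanges finished
    interviewer_hours = (handling + waiting / concurrency) / 60.0
    return completed / interviewer_hours

# Compare the two levers (toy numbers only):
print(interviews_per_hour(timeout_min=60, concurrency=1))   # baseline
print(interviews_per_hour(timeout_min=15, concurrency=1))   # shorter timeout
print(interviews_per_hour(timeout_min=60, concurrency=3))   # 3 cases at once

Under these assumptions, shortening the timeout mainly cuts dead time spent waiting on cases that will never complete, while concurrency only amortizes waiting that happens anyway; which lever wins depends entirely on the assumed delay and drop-out distributions.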


Interviewer-Respondent Rapport in CAPI and Video-Mediated Interviews

Dr Hanyu Sun (Westat) - Presenting Author
Dr Frederick Conrad (University of Michigan)

In video-mediated interviews, the interviewer and the respondent can see and talk to each other via a video window. Some research has suggested that building rapport in video-mediated interviews is problematic due to impoverished visual cues (e.g., Hay-Gibson, 2009) or technical issues (e.g., Anderson, 2008; Seitz, 2016); other work has suggested that rapport can be established in video-mediated interviews just as well as in face-to-face interviews (e.g., Iacono et al., 2016; Deakin and Wakefield, 2014). Note, however, that these studies involved qualitative interviews, not conventional survey interviews, and the interviewers often had multiple prior contacts with the respondents, via email exchanges or social media, which may have strengthened rapport in the video-mediated interviews. It is unknown whether rapport can be established to the same extent in CAPI and video-mediated interviews when topics vary in sensitivity and when the interviewer and the respondent meet for the first time at the start of the interview. To address this question, we conducted a laboratory experiment in which eight professional interviewers interviewed 128 respondents. Respondents were randomly assigned to either a 35-minute CAPI interview or a video-mediated interview, followed by a self-administered questionnaire in which they rated how much rapport they felt with the interviewer. Interviewers were debriefed after each interview and completed the same rapport scales. In the presentation, we will first compare the respondents' and the interviewers' rapport ratings to see whether the two parties agree on how much rapport they had in the interview just completed. Next, we will examine whether respondents' and interviewers' sense of rapport varies by mode. Finally, we will explore whether the presence of paralinguistic behaviors (e.g., laughter, backchannels) varies by mode.
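[Editor's illustration] One simple way such dyad-level agreement might be quantified is sketched below in Python (3.10+ for statistics.correlation); the ratings are invented for illustration and are not data from this study, and the authors' actual analysis may differ.

from statistics import correlation, mean

# Hypothetical rapport ratings (1-7 scale), one respondent-interviewer pair
# per completed interview; invented values, not study data.
respondent_ratings  = [6, 5, 7, 4, 6, 5, 3, 6]
interviewer_ratings = [5, 5, 6, 4, 7, 4, 3, 5]

# Correlation: do the two parties rank interviews similarly?
r = correlation(respondent_ratings, interviewer_ratings)

# Mean signed difference: does one party systematically report more rapport?
bias = mean(r_ - i_ for r_, i_ in zip(respondent_ratings, interviewer_ratings))

print(f"agreement r = {r:.2f}; respondent-minus-interviewer bias = {bias:.2f}")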


Gaze Patterns and the Self-View Window in Skype Survey Interviews

Dr Shelley Feuer (U.S. Census Bureau)
Dr Michael Schober (New School for Social Research) - Presenting Author

In video-mediated survey interviews, how does the small self-view window affect people's disclosure of sensitive information and their self-reported feelings of comfort? This study replicates and expands on previous research by (a) tracking where video survey respondents look on the screen—at the interviewer, at the self-view, or elsewhere—while answering questions and (b) examining how gaze location and duration differ for sensitive vs. nonsensitive questions and for more and less socially desirable answers. In a laboratory experiment, 133 respondents answered sensitive and nonsensitive questions, taken from large-scale US government and social scientific surveys, over Skype. Respondents were randomly assigned to having a self-view window or not, and interviewers were unaware of the self-view manipulation. Gaze was recorded using an unobtrusive eye-tracking system. The results show that respondents who could see themselves looked more at the interviewer during question-answer sequences about sensitive (compared to nonsensitive) questions, while respondents without a self-view window did not. Respondents who looked more at the self-view window reported feeling less self-conscious and less worried about how they came across to the interviewer. Additionally, the self-view window increased disclosure for a subset of sensitive questions, specifically total number of sex partners and frequency of alcohol use. Respondents who could see themselves perceived the interviewer as more empathic and reported having thought more about what they said (arguably reflecting increased self-awareness). For all respondents, gaze aversion—looking away from the screen entirely—was linked to socially undesirable responses and self-presentation concerns. The findings demonstrate that gaze patterns in video-mediated interviews can be informative about respondents' experience and their response processes. Findings like these can contribute to the design of new, potentially cost-saving video-based data collection interfaces.
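[Editor's illustration] Gaze measures like "looked more at the self-view" are typically derived by binning eye-tracker samples into areas of interest (AOIs) and summing dwell time. The Python sketch below shows the idea; the window geometry, sampling rate, and AOI layout are hypothetical assumptions, not the authors' pipeline.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x < self.right and self.top <= y < self.bottom

# Hypothetical screen layout in pixels; the self-view is drawn on top of
# the interviewer's video, so it must be checked first.
SELF_VIEW = Rect(1040, 560, 1280, 720)
INTERVIEWER = Rect(0, 0, 1280, 720)

def dwell_times(samples, sample_period_s=1 / 60):
    # Sum gaze dwell time (seconds) per area of interest. `samples` is an
    # iterable of (x, y) screen coordinates from the eye tracker, assumed
    # to arrive at a fixed sampling rate. Unmatched samples count as gaze
    # aversion ("elsewhere").
    totals = {"self_view": 0.0, "interviewer": 0.0, "elsewhere": 0.0}
    for x, y in samples:
        if SELF_VIEW.contains(x, y):
            totals["self_view"] += sample_period_s
        elif INTERVIEWER.contains(x, y):
            totals["interviewer"] += sample_period_s
        else:
            totals["elsewhere"] += sample_period_s
    return totals

# Example: three samples on the interviewer, one on the self-view, one off-screen
print(dwell_times([(400, 300), (410, 305), (420, 310), (1100, 600), (-50, 10)]))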