
ESRA 2019 full program




Talking or Texting: Interviewer-Respondent Interaction in Face-to-Face, Telephone, and Messenger Tool Surveys.

Session Organisers: Dr Marieke Haan (University of Groningen)
Dr Yfke Ongena (University of Groningen)
Dr Peter Lugtig (Utrecht University)
Dr Vera Toepoel (Utrecht University)
Time: Tuesday 16th July, 16:00 - 17:00
Room: D23

Much can be learned from the microanalysis of survey interactions (Bradburn, 2015). Although analyzing these data is often tedious, it yields vital knowledge about the verbal behaviors of interviewers and respondents and their effects on survey participation, data collection, and measurement quality. This knowledge can in turn be used to improve question wording, interviewer training, and data collection procedures.

Work on interviewer-respondent interactions has traditionally focused on face-to-face and telephone interviews. For example, studies show how rapport develops between interviewers and respondents (Garbarski et al. 2016), how verbal interviewer actions can be explained (Haan et al., 2013; Schaeffer et al. 2013), and how respondents’ disfluencies lead to unreliable responses (Schober et al., 2012).

New possibilities for interaction arise with surveys that are conducted online on smartphones. Mobile phones are convenient communication tools for making web surveys more responsive. Messenger tools that people increasingly use to engage with others (e.g. WhatsApp, Facebook Messenger, SMS) can also be used in survey data collection. First, text messaging can be used for surveys by building automatic interviewing systems with messenger chatbots for mobile phones (Schober et al. 2015). These systems can be made more responsive by using online probing techniques that simulate real interviewers. Second, text messages can be sent by real interviewers, which results in interviewer-respondent interaction in a messenger environment. These developments raise the question of which specific aspects make interaction in a talking mode (face-to-face or telephone) or a texting mode (via messenger tools) most productive, engaging and stimulating.

We invite researchers to present their work on survey interactions in talking and texting modes. Papers may concern a variety of topics but should include analysis of the verbal behaviors of interviewers or simulated interviewers (e.g., virtual agents, chatbots), respondents, or both.

Keywords: smartphone, interaction, probing

Do Interviewers Accurately Code Answers for Field Code Items? An Experiment on the Effect of Number of Response Categories

Dr Jolene Smyth (University of Nebraska-Lincoln) - Presenting Author
Dr Kristen Olson (University of Nebraska-Lincoln)

Surveys often ask interviewers to code answers to an open-ended question into a set of closed-ended nominal categories. Previous literature shows that this process is error-prone (e.g., Fowler and Mangione 1990; Lepkowski et al. 1995; Mitchell et al. 2008; Rustemeyer 1977; Strobl 2008). A recent analysis of one item with 19 nominal response options revealed that interviewers inaccurately entered over half of the responses to this question (Smyth & Olson 2011). Surprisingly little research addresses the behaviors of interviewers and respondents during the administration of these items or evaluates why error rates are so high. One possible reason is that interviewers struggle to search through a long list of categories. In this paper, we experimentally examine how the length of the list of response categories affects response distributions, interviewer and respondent behaviors, and interviewer recording accuracy for two questions. The first asks for an occupation and requires interviewers to code one response into either 8 or 17 broad substantive categories or an “other, specify” option. The second asks what activities people do online and requires the interviewer to code all responses into either 6 or 18 categories or an “other, specify” option. We use the Work and Leisure Today II survey, which was conducted by telephone in summer 2015 (n=911, AAPOR RR3=7.84%). Recordings of the interviews were transcribed and behavior coded at the conversational turn level. Preliminary analyses indicate that response distributions do not differ for the occupation question but do differ for the online activities question. We conclude with implications for questionnaire design and interviewer training.


Differences in Interviewer-Respondent Interactions in Surveys Administered by Telephone or in Person

Dr Yfke Ongena (University of Groningen) - Presenting Author
Dr Marieke Haan (University of Groningen)

When choosing a mode of data collection for computer-assisted surveys, a researcher has three main options: the computer-assisted telephone interview (CATI), the computer-assisted personal interview (CAPI), or a web interview (i.e., a self-administered interview). Generally, CAPI allows for collecting the most complex data at the highest quality. This higher data quality in CAPI interviews may be due to the finding that the presence of an interviewer reduces respondents’ satisficing behaviors (see Heerwegh 2008). An interviewer can motivate respondents, presumably by means of rapport (Garbarski et al 2016). However, when interviewers administer the survey, their social presence may also increase socially desirable responding. Analyses of response distributions have shown that social desirability bias and satisficing are more prevalent in CATI than in CAPI (see Holbrook et al 2003), and lowest in (self-administered) web interviews.

By analyzing interactions in 60 CATI and 54 CAPI interviews that originated from a mixed-mode experiment using the European Social Survey questionnaire (Haan 2015), we found mixed differences between CATI and CAPI interactions. For example, interviewer laughter appeared to be more common in CATI than in CAPI, but apologetic utterances such as ‘sorry’ occurred equally often in both modes. Furthermore, a significant difference was found in the number of words uttered: question-answer sequences contained more words in CATI than in CAPI. Further analysis showed that respondents in CATI had more difficulty formulating their responses than in CAPI. These task-related issues may contribute to decreased trust and motivation among respondents in CATI interviews, and may in turn explain the increased level of satisficing and social desirability bias in this mode compared to CAPI.


Adapting Surveys to the Modern World: Comparing a Researchmessenger Design to a Regular Responsive Survey Design for Online Surveys

Dr Vera Toepoel (Utrecht University) - Presenting Author
Dr Peter Lugtig (Utrecht University)
Dr Marieke Haan (University of Groningen)
Dr Bella Struminskaya (Utrecht University)
Miss Anne Elevelt (Utrecht University)

Relevance & Research Question: In recent years, surveys have been adapted to mobile devices. This results in mobile-friendly designs, where surveys are responsive to the device being used. How mobile-friendly a survey is depends largely on the design of the survey software (e.g. how it handles grid questions, paging or scrolling designs, visibility, tile design, etc.) and the length of the survey. An innovative way to administer questions is via a researchmessenger, a WhatsApp-like survey tool that communicates with respondents the way one communicates via WhatsApp (see www.researchmessenger.com). In this study we compare a researchmessenger layout to a responsive survey layout in order to investigate whether the researchmessenger provides results similar to those of a responsive survey layout and whether it results in more respondent involvement and satisfaction.

Methods & Data: The experiment was carried out in 2018 with panel members from Amazon Mechanical Turk in the United States. Respondents were randomly assigned to the researchmessenger or the responsive survey. In addition, we randomly varied the type of questions (long answer scale, short answer scale, open-ended). To investigate question order effects, and possible respondent fatigue depending on the type of survey, we randomly ordered blocks of questions. In total, 1,728 respondents completed the survey.

Results: We will investigate response quality (e.g. response distributions/mean scores, number of check-all-that-apply selections, number of don’t knows, item missingness and dropout, use of the back button), survey duration, and respondents’ evaluation of the questionnaire. Respondents could self-select into a particular device, so we will also compare results obtained via different devices. We will show a video of the layout of both the researchmessenger and the regular survey.

Added Value: The experiment identifies recommendable design characteristics for online surveys at a time when survey practitioners need to rethink the design of their surveys, since more and more surveys are being completed on mobile phones.