
ESRA 2019 Programme at a Glance


Usability Testing in a Changing Survey Environment 1

Session Organiser: Mrs Emily Geisen (RTI International)
Time: Tuesday 16th July, 14:00 - 15:30
Room: D24

Technology is constantly evolving, which promotes innovation and advancement in the way we collect survey data. Due to these technological advances, the ways that respondents and interviewers interact with surveys are changing. For example, modern web surveys include features such as touch-screen input, videos/images, GPS, sensors, voice recognition and text-to-speech, dynamic error messages, and other capabilities. Each of these features changes the respondent-survey interaction, which can affect the quality of the data collected. As a result, usability testing is critical to ensure that respondents can complete surveys accurately, efficiently, and with satisfaction. Technological advances have also affected the methods available for conducting usability testing, such as improved eye-tracking equipment and remote, unmoderated user testing. This session invites presentations that either showcase usability testing of surveys with technological advances or demonstrate innovative methods for conducting usability testing on surveys. We particularly invite presentations employing (1) usability testing of surveys with advanced technological features (e.g., sensors, machine learning), (2) usability testing of new survey platforms (e.g., Blaise 5), (3) innovations or advances in existing usability methods, or (4) usability testing in multicultural contexts. We are also interested in studies that empirically demonstrate the utility and benefit of usability testing.

Keywords: Usability testing, eye-tracking, user experience, remote, unmoderated

Using Eye-Tracking to Study the Effects of Three Different Grid Question Designs on Response Burden in Web Surveys

Dr Joss Roßmann (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)

In self-administered surveys, such as web surveys, the use of grid (or matrix) questions has been very popular. Yet previous research has provided evidence that cognitively demanding grid questions in particular have serious negative effects on response quality compared with alternative designs, such as item-by-item presentation of the questions. However, some authors have argued that grid questions are an efficient question format because they reduce the (perceived) length of questionnaires. As a consequence, grid questions might reduce the response burden for respondents. In line with this assumption, many previous studies have found shorter response times for grid questions compared with alternative question formats (e.g., Couper, Traugott, and Lamias 2001; Tourangeau, Couper, and Conrad 2004). Recently, a study by Roßmann, Gummer, and Silber (2017) reported that the response time for the first question item did not differ between a grid and an item-by-item design, thus raising the question of whether longer response times for item-by-item designs result from deeper cognitive processing when answering the remaining items of the question. This contribution aims to answer the research question of whether altering the question format affects the depth of cognitive processing in answering grid questions.

We use data from three fully randomized, between-subject experiments, which we implemented in an eye-tracking study with 131 participants from different socio-demographic strata. The respondents answered the ten items of the questions either on a single page, on two pages with five items each, or on ten separate pages. We use response times as well as fixation times and fixation counts to study differences in the response process between the three different grid question designs. We conclude with recommendations for the design of grid questions and an outlook on prospective research opportunities.


Address & Navigation Layouts: How Eye Tracking and Usability Testing Informed U.S. Census Bureau Survey Designs

Ms Erica Olmsted-Hawala (U.S. Census Bureau)
Ms Elizabeth Nichols (U.S. Census Bureau)
Dr Jen Romano-Bergstrom (Bridgewater Associates) - Presenting Author
Mrs Sabin Lakhe

The way a user interacts with a survey can affect survey data quality. Usability lab staff at the U.S. Census Bureau tested variations of survey screens, specifically navigation button placement and address field layouts. In the first study, we used eye tracking and usability testing to determine the optimal location of the navigation buttons in a web-based survey. We examined whether button placement (left or right) affected users' time to look at and time to click the forward navigation button, as well as overall satisfaction with the survey. In the second study, we used eye tracking and usability testing to assess two different ways to collect address information. We examined whether respondents saw and read the address question, the instructions, and the other input fields, using the number of fixations per character as the comparison metric. Incorporating eye tracking into traditional user testing allowed us to identify the better placement of the navigation buttons, as well as some design issues with the address fields.


Improvement of Mobile Questionnaire Through Usability Evaluation

Dr Sunhee Park (Statistics Korea) - Presenting Author

Recently, as information and communication technologies have developed, research tools have diversified, and the use of mobile devices in particular is expanding. Mobile devices have small screens and touch input. This study introduces the process of using usability evaluation to design a mobile questionnaire that is easy to review and convenient to respond to. We observed response behavior and tracked respondents' gaze while they answered the census, evaluating all processes from accessing the mobile questionnaire to the finalization stage. In particular, we evaluated how easily respondents could find the invitation and the method needed to connect to the survey. We also analyzed how screen composition, scrolling, and the arrangement of questions affect response behavior in the mobile questionnaire. Two usability evaluations were conducted. Across these evaluations, we observed respondents' general reaction patterns when radio buttons and text boxes were presented and identified designs that are useful for respondents. Through this study, we expect to share the usefulness of usability evaluation and knowledge about designing mobile questionnaires.


Usability Testing of the Time Use Survey App: Pitfalls and Recommendations for Improvement

Mrs Matea Paškvan (Statistics Austria) - Presenting Author
Mrs Sonja Ghassemi-Bönisch (Statistics Austria)
Mr Marc Plate (Statistics Austria)
Mrs Kathrin Gärtner (University of Applied Sciences Wiener Neustadt)
Mr Gabriel Kittel (Statistics Austria)
Mr Friedrich Csaicsich (Statistics Austria)
Mr Ivo Pocorny (Modul University)
Mrs Constanze Volkmann (Vienna University of Economics and Business)
Mrs Sonja Hinsch (Statistics Austria)
Mrs Manuela Heidenreich (Statistics Austria)

The time use survey (TUS) is a prominent statistical survey conducted by national statistical agencies. Capturing how people spend their time, TUS data are used to classify and quantify the main activities of the broader population across Europe. Traditionally, data were obtained with paper-and-pencil diaries that respondents had to fill in using fixed 10-minute intervals. Because this method is rather user-unfriendly, newer approaches highlight the importance of using apps to capture time-use data and to offer more user-centred support. This study introduces the TUS-App (iOS and Android) in Austria for the first time and tests its usability with a cognitive test design that includes video recording during app use as well as extensive interviews. The test shows that the app works adequately, but several issues still have to be addressed: the loading time of the app and the variety of devices and operating system versions can result in performance problems; the 144 time slots that have to be filled in are challenging; and the look-up tables (i.e., predefined lists of activities that appear after typing some initial letters) are not always helpful. In particular, the 144 ten-minute slots, although they are meant to help respondents be accurate, instead lead to imprecise entries. A calendar-like version of the app with a start and an end button for each activity might address this problem. Instead of providing a fixed activity list via look-up tables, we advise using the mobile device's built-in dictionary in connection with the autocomplete function. The detailed reconstruction of the day is burdensome, and app users expect instant gratification (e.g., a graphical summary of their personal time use). Moreover, the app should give participants immediate feedback.