
ESRA 2019 Program at a Glance


Split Questionnaire Design 1

Session Organisers: Professor Barry Schouten (Statistics Netherlands and Utrecht University)
Dr Andy Peytchev (RTI)
Time: Wednesday 17th July, 11:00 - 12:30
Room: D18

Over the last ten years, many large surveys have migrated to mixed-mode designs that include the web as a survey mode. In recent years, access to the web has diversified rapidly, and a variety of devices now exist, both fixed and mobile. Online surveys thus face a range of devices that they may discourage, accept or encourage. Such decisions depend on the features of both the survey and the devices. Three prominent device features are screen size, navigation and timing. Devices can be as small as smartphones or as large as smart TVs. Navigation ranges from touchscreen to mouse and keyboard. Timing refers to the moment and place at which the devices are used.
It is generally believed that smartphones demand a shorter survey duration, although the empirical evidence is mostly restricted to break-off rates.
This session is about designs that attempt to shorten survey questionnaires without deleting modules for all sample units. So-called split questionnaire designs (SQD) allocate different sections of the questionnaire to different (random) subsamples. Such designs are not at all new and were already suggested several decades ago; however, there never was a sufficiently strong business case to implement them. With the emergence of mobile devices, this business case now seems strong.
SQD affects both questionnaire design and data analysis. The (planned) missing parts of the questionnaire need to be selected in a sophisticated way that acknowledges both questionnaire logic and the strength of associations between survey variables. Imputation techniques are a natural option but can be quite advanced for some users.
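To make the allocation step concrete, the following is a minimal sketch of how sections might be assigned to random subsamples. The module names, the single fixed core module, and the one-random-subset rule are illustrative assumptions, not a design prescribed by the session organisers.

```python
import random

# Illustrative SQD allocation: everyone answers a core module;
# the remaining modules go to random subsamples, so every module
# is still observed for a random part of the sample.
CORE = ["demographics"]                       # assumed core module
SPLIT = ["health", "income", "attitudes"]     # assumed split modules

def assign_modules(respondent_ids, n_extra=2, seed=42):
    """Give each respondent the core module plus a random subset of
    the split modules; unassigned modules are missing by design."""
    rng = random.Random(seed)
    return {rid: CORE + rng.sample(SPLIT, n_extra)
            for rid in respondent_ids}

# Example: assign_modules(range(1000)) leaves each split module
# unanswered for roughly a third of respondents -- missingness that
# is MCAR by construction, making imputation a natural completion.
```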
For this session, we invite papers that address one or more aspects of SQD, ranging from questionnaire design to imputation approaches.

Keywords: Smartphone; Adaptive survey design; Imputation

Split Survey Design: Missing Completely at Random?

Dr Kurt Pflughoeft (University of Wisconsin - Stevens Point) - Presenting Author
Ms Sharon Alberg (MaritzCX)


Survey researchers need to place a premium on short questionnaires to encourage participation and to accommodate devices such as smartphones. This is especially true for the B2C market, where there may be a lower level of engagement with the company’s products. Still, there are many aspects of the product experience that need to be measured to provide companies with useful information for their decision-making process.

In this study, we examine randomly selecting questions from a module and compare the results with those from respondents who answered all of the questions. The questionnaire addresses several industries, and the main module contains several questions concerning the customer’s experience. The shortened questionnaire randomly selects a subset of the module questions.

Theoretically, the design of the study should mimic a situation in which data are “Missing Completely at Random” (MCAR). We will investigate whether this is the case for our respondents using several analysis techniques. We are especially interested in the impact of such designs on information measures such as Theil’s relative importance, which is a combinatorially explosive calculation.

Importance will be calculated by processing the data via several different methods. First, we will use the pairwise correlation matrix as the input to Theil’s measure. Second, we will examine potential grouping by a demographic variable that may indicate the data are only “Missing at Random” (MAR); in that case, sampling weights can be used in the importance calculation. Finally, we will examine the use of imputation prior to the importance calculation.
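The abstract does not spell out the calculation itself. As one plausible reading of a combinatorially explosive importance measure computed from a pairwise correlation matrix, the sketch below averages each predictor’s marginal contribution to R² over all subsets of the other predictors (a Shapley-style decomposition); the function names and this specific decomposition are assumptions, not necessarily the authors’ implementation of Theil’s measure.

```python
import numpy as np
from itertools import combinations
from math import factorial

def r2_from_corr(Rxx, rxy, subset):
    """R^2 of a regression restricted to `subset`, computed from the
    predictor correlation matrix Rxx and the vector rxy of
    predictor-outcome correlations (standardized variables)."""
    if not subset:
        return 0.0
    idx = list(subset)
    R = Rxx[np.ix_(idx, idx)]
    r = rxy[idx]
    return float(r @ np.linalg.solve(R, r))

def shapley_importance(Rxx, rxy):
    """Average marginal R^2 contribution of each predictor over all
    subsets of the other predictors. The number of subsets grows as
    2^p, which is what makes such measures combinatorially explosive
    -- and sensitive to how a split design thins the pairwise data."""
    p = len(rxy)
    importance = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            weight = factorial(size) * factorial(p - size - 1) / factorial(p)
            for S in combinations(others, size):
                importance[j] += weight * (
                    r2_from_corr(Rxx, rxy, S + (j,))
                    - r2_from_corr(Rxx, rxy, S))
    return importance

# Under a split design, Rxx and rxy would be estimated from
# pairwise-complete cases, which is exactly where MCAR vs. MAR matters.
```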

The comparison of the full questionnaire with the reduced questionnaire will help decision makers determine whether the reduced survey alters results.


Can Push-to-Web Surveys Challenge the Position of Face-to-Face Surveys in Terms of Data Quality?

Ms Jessica Herzing (University of Lausanne) - Presenting Author
Mr Alexandre Pollien (FORS)
Mrs Michèle Ernst Stähli (FORS)
Mr Dominique Joye (University of Lausanne)
Mrs Patricia Milbert (University of Lausanne)
Mr Michael Ochsner (FORS and ETH Zürich)

Many survey practitioners switch from existing face-to-face surveys to less expensive web surveys. Because face-to-face surveys can be longer than web surveys, one challenge is how to design a web survey without losing the information covered by the original face-to-face survey. Our paper therefore evaluates four survey designs for switching from a face-to-face survey of the general population to a push-to-web survey. For this purpose, we investigate data quality and issues of representation for (a) a long push-to-web survey announced as long, (b) a long push-to-web survey announced as short, (c) a short push-to-web survey using a matrix design, and (d) a short push-to-web survey using a matrix design with a follow-up survey.
Using data from the European Values Study (EVS) in Switzerland in 2017, we compare the two long push-to-web surveys (designs a and b) and the two short push-to-web surveys (designs c and d) with the original face-to-face survey, using register data (internal benchmarks) and other general population surveys (external benchmarks). Subsequently, we explore response rates, break-off rates, and relative differences and distributions across the various survey design conditions against the internal and external benchmarks. Furthermore, we analyse the predicted probabilities of responding to the survey. Hence, we shed light on nonresponse and the external validity of long push-to-web surveys and of push-to-web surveys with a matrix design. This knowledge is relevant for survey practitioners who want to switch from a face-to-face survey to a push-to-web survey without losing the information covered by the original survey.
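The exact difference measure is not spelled out in the abstract; as a sketch of the kind of benchmark comparison described here, the snippet below computes the average absolute relative difference between a survey distribution and a benchmark distribution. The categories and figures are hypothetical, and the authors may well use a different summary.

```python
import numpy as np

def avg_abs_relative_difference(survey, benchmark):
    """Mean of |survey share - benchmark share| / benchmark share
    over all categories -- one common one-number summary of how far
    a survey's distribution sits from a register or benchmark."""
    diffs = [abs(survey[k] - benchmark[k]) / benchmark[k]
             for k in benchmark]
    return float(np.mean(diffs))

# Hypothetical age distributions (shares sum to 1 in each source):
survey_est = {"15-29": 0.18, "30-49": 0.35, "50+": 0.47}
register   = {"15-29": 0.22, "30-49": 0.36, "50+": 0.42}
print(avg_abs_relative_difference(survey_est, register))  # ~0.11
```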


Modularization in Web Surveys: The Impact on Nonresponse and Measurement Error

Miss Maria Tezina (Online Market Intelligence, National Research University Higher School of Economics) - Presenting Author
Dr Aigul Mavletova (National Research University Higher School of Economics)


There is some evidence that modularization, i.e. chunking the questionnaire into several parts, can be efficient in web surveys, especially considering the increasing proportion of respondents who complete web surveys on mobile devices. Toepoel and Lugtig (2018) found a lower fraction of missing information, a lower item nonresponse rate, and a lower rate of satisficing when a web survey was chunked into 3 or 10 parts, compared to a control condition of completing a web survey that took 20 minutes on average. The authors ran their experiment in the Longitudinal Internet Studies for the Social Sciences (LISS) Panel in the Netherlands, which is used mostly for academic research.

We plan to run an experiment in an online access panel used mainly for business research. Since chunking into 10 parts seems practically impossible in business-oriented research, due to the increased number of invitations and the risk of oversurveying, we suggest cutting the survey into fewer modules. The experiment has the following design: 1) chunking: a control condition versus experimental conditions of 2, 3, and 5 modules; 2) survey duration: questionnaires of about 20 and 40 minutes; 3) incentives: standard, versus increasing incentives for each subsequent module, versus incentives only after completing all modules. We will run the experiment in a volunteer online access panel run by Online Market Intelligence (http://www.omirussia.ru/en) in Russia. Respondents will be randomly assigned to one of the conditions, and we will compare data quality between the conditions. In addition, we will ask panelists whether they like the idea of modularization and whether it should be implemented in the panel. The experiment will be conducted in March 2019. We will test how modularization can be implemented in online access panels and how elements such as incentives and survey duration can be taken into account when designing modularization.
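As a minimal illustration of the chunking step, the sketch below splits an ordered item list into a given number of consecutive modules of near-equal size. It is an assumption for illustration only; real modularized questionnaires would also need to respect filters and other questionnaire logic.

```python
def chunk_questionnaire(items, n_modules):
    """Split an ordered list of questionnaire items into n_modules
    consecutive chunks of (nearly) equal size, preserving order."""
    base, extra = divmod(len(items), n_modules)
    modules, start = [], 0
    for m in range(n_modules):
        size = base + (1 if m < extra else 0)
        modules.append(items[start:start + size])
        start += size
    return modules

# Example: a 40-item survey cut into the 2-, 3-, and 5-module
# experimental conditions described above.
items = [f"Q{i}" for i in range(1, 41)]
for k in (2, 3, 5):
    print(k, [len(m) for m in chunk_questionnaire(items, k)])
# 2 [20, 20] / 3 [14, 13, 13] / 5 [8, 8, 8, 8, 8]
```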


Questionnaire Splitting Design: Exploring the Optimal Length and Time

Mrs Evangelia Kartsounidou (Aristotle University of Thessaloniki) - Presenting Author
Dr Ioannis Andreadis (Aristotle University of Thessaloniki)


Nowadays, one of the main challenges for web-based surveys is optimization for mobile devices. This increases the need for shorter survey instruments, which are usually associated with higher completion rates and better response quality. Although scholars have explored different ways to create shorter questionnaires, it is rather difficult to define the optimal survey length or design that maximizes data quality. Using a questionnaire splitting design, this study aims to address this methodological gap by investigating at which length an online questionnaire should be split, and what the optimal duration of the breaks between the sub-questionnaires is, such that the data quality of the survey is maximized.
Our sample consists of volunteers who were split randomly into six different groups. The overall survey duration is ten minutes. We created three web surveys with different questionnaire lengths: i) a one-minute first part and a nine-minute second part; ii) a three-minute first part and a seven-minute second part; and iii) two five-minute parts. To explore the duration of the break between the two sub-questionnaires: i) we give respondents the opportunity to answer the second part immediately after completing the first part; ii) we send the invitation to the second part the day after completion of the first part; iii) after three days; or iv) after six days. This experimental design permits us to explore which combinations of sub-questionnaire length and break time yield the highest data quality. To examine data quality, we measure participation, in terms of response rates and drop-outs, and response behavior. Possible response quality indicators include non-differentiation, non-substantive responses and response latencies.
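Non-differentiation is usually assessed at the respondent level; as a minimal sketch of one common operationalization (not necessarily the index the authors will use), the standard deviation of a respondent’s answers across a battery of same-scale items flags straightlining:

```python
import numpy as np

def nondifferentiation(grid_responses):
    """Per-respondent straightlining index: the standard deviation
    of answers across a battery of items rated on the same scale.
    0 means identical answers to every item in the battery."""
    return np.asarray(grid_responses, dtype=float).std(axis=1)

# Three hypothetical respondents on a five-item Likert battery:
print(nondifferentiation([[3, 3, 3, 3, 3],    # straightliner -> 0.0
                          [1, 5, 2, 4, 3],    # differentiated -> ~1.41
                          [4, 4, 5, 4, 4]]))  # mild -> 0.4
```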