ESRA 2017 Programme
Wednesday 19th July, 11:00 - 12:30 Room: Q2 AUD2
Probability-based research panels 2
Chair: Mr Darren Pennay (Social Research Centre, Australian National University)
Session Details
Around the world, online panels are now routinely used as a method for gathering survey data for many varied purposes, including economic, political, public policy, marketing, and health research.
Web surveys, most of which are conducted via online panels, are a relatively recent development in the history of survey research, starting in the United States and Europe in the mid-1990s and then expanding elsewhere in the world. Worldwide expenditure on online surveys has quadrupled in the last 10 years, from US$1.5 billion in 2004 to US$6 billion in 2014.
From the mid-1990s to the mid-2000s, there was exponential growth in the creation of online panels and in the size of their memberships, leading to a proliferation of unique panel vendors. Since 2005, however, the growing need for panels with extremely large numbers of panellists has led to a consolidation of panel vendors through corporate acquisition (cf. Callegaro, Baker, Bethlehem, Göritz, Krosnick and Lavrakas, 2014).
As of 2015, the vast majority of online panels, as well as the vast majority of the people who participate in them, had been established or recruited via non-probability sampling methods.
In the United States, parts of Europe, and now Australia, the increased use of the web for data collection has also resulted in the establishment of probability-based online research panels, enabling scientific sampling of the population.
The intent of this session is to explore the development of probability-based online panels around the world and to encourage survey practitioners involved in probability-based online panels to present papers exploring the various methods used to establish and maintain these panels. Papers might explore issues such as methods for including the offline population, methods to maximise response and minimise attrition, and methods to reduce measurement error when administering questionnaires to panellists.
It is hoped that this session will be of interest to probability-based online panel practitioners as well as researchers who routinely use probability and non-probability online panels or want to learn more about such panels.
Paper Details
1. Effects of contact letters and incentives in offline recruitment to a probability-based online panel survey
Mr Nicolas Pekari (FORS, Swiss Centre of Expertise in the Social Sciences)
Online panels based on probability samples have typically relied on costly recruitment procedures involving interviewers, but, especially for short-term panels, the costs of using face-to-face or telephone recruitment may be prohibitive. On the other hand, non-probability based online panels, while much less costly, raise concerns regarding how well they represent the general population and hence, the accuracy of the estimates they produce.
One possibility for reducing the cost and complexity of recruiting a probability-based panel is to use mail as the only mode of contact. The main advantages of this approach are lower costs and far fewer requirements in terms of infrastructure and personnel. It also allows institutions to conduct surveys in-house, saving costs and retaining control over the whole procedure. The main disadvantage is that no interviewer is present to persuade people to join or to address respondents' concerns. The content and design of contact letters and incentives thus become the fundamental elements determining whether a person joins and remains in the panel. However, past research offers little guidance on how to design these elements to ensure the success of this type of survey.
To address these shortcomings, the aim of the current study is to evaluate the effects of both letter wording and incentives on participation in the first wave of the survey, enrolling in the panel, participating in subsequent waves, and possible biases in sociodemographics and key estimates. The focus is on how to inform participants about the panel component and the addition of a conditional incentive to the existing prepaid incentive.
We use data collected in a three-wave pilot experiment conducted in the context of the 2015 Swiss Electoral Study (Selects). The sample consisted of 2,700 German-speaking Swiss citizens aged 18 or older living in Switzerland and included the names and addresses of the individuals, as well as basic sociodemographics. The experimental design first divided the sample into two groups receiving letters with different wording: in one, the panel aspect was described in detail; in the other, it was addressed more vaguely. Each group was then subdivided into three groups with different conditional incentives. In addition to a 10 CHF prepaid incentive, respondents were either promised another 10 CHF, entered into a raffle for five iPads, or, as a control group, offered no additional incentive.
We compare the response rates in the first wave, the proportion enrolling in the panel (i.e. providing a valid e-mail address), and attrition among the different groups. We then study the determinants of these aspects as well as the effects of the different conditions on the representativeness of the sample. Finally, we provide an analysis of the costs of the different options, along with a discussion, recommendations for conducting similar surveys, and avenues for further research.
2. Is shorter always better? An experimental variation of the length of the recruitment interview for a probability-based online panel.
Ms Ines Schaurer (GESIS - Leibniz Institute for the Social Sciences )
Recruitment interviews for probability-based online panels are designed with the overall aim of keeping respondent burden low in order to foster recruitment. Bradburn (1978) identifies interview length as one of the central survey features related to respondent burden.
However, research on the effect of interview length on next-wave participation is rare and inconclusive.
With the emergence of probability-based panels in the social sciences, identifying the optimal recruitment interview length becomes paramount. In our presentation, we address the question of whether a short telephone recruitment interview is superior to a longer version in terms of panel recruitment and survey participation.
We analyze data from the GESIS Online Panel Pilot study, an offline-recruited probability-based online panel of Internet users. Starting in February 2011, respondents were surveyed online every month for 8 months.
To test the effect of a very short interview on recruitment success, a survey experiment was implemented that varied the length of the telephone recruitment interview. One of two versions of the recruitment interview was randomly assigned to each telephone number before the call: 90% (n=1,372) of the interviews were assigned to the regular interview condition and 10% (n=197) to the short interview condition. In the shorter version, the recruitment interview was reduced to an introductory question on living in Germany, basic demographic information (sex, age, school education, employment status), and the panel participation request. The regular interview additionally included several questions about life in Germany, attitudinal questions, and questions about landline and mobile phone access. The regular interview was announced as lasting ten minutes, the shorter version three minutes.
The outcome variables are the recruitment probability for the panel and the cumulative response rate. The effect of the interview length on sample composition is analyzed as well.
The bivariate analysis shows that the recruitment rate for the regular interview (47%) is eight percentage points higher than for the short interview (39%). Because the announced length of the telephone interview could affect who decided to participate in it, it was necessary to control for sample differences introduced at the telephone interview stage. After controlling for respondents' education, the direction of the effect remains the same but is no longer significant. We assume that the negative effect of the short interview is mainly attributable to differences that arose at the previous selection step: respondents with lower education seem to be more attracted by the shorter announced interview than respondents with higher education.
Finally, the effect of the experimental manipulation on the composition of the resulting online sample was evaluated. The analysis showed no systematic differences in sample composition between the two experimental groups on a limited set of characteristics. The varying length of the telephone recruitment interview did not introduce sample composition bias into the resulting online sample.
3. From face-to-face to mobile Internet: replicating the French ESS questionnaire on the ELIPSS panel.
Miss Emmanuelle Duwez (CDSP (Sciences Po))
Mr Simon Le Corgne (CDSP (Sciences Po))
Mr Malick Nam (CDSP (Sciences Po))
Mr Mathieu Olivier (CDSP (Sciences Po))
The European Social Survey (ESS) is an academically driven cross-national survey conducted every two years across Europe, in which France has participated since the first round in 2002. The ESS measures the attitudes, beliefs and behaviour patterns of populations in more than thirty countries. One of its main goals is to track stability and change in the social structure of European societies and to shed light on how Europe’s social, political and moral fabric is changing. In the ESS, data are collected via face-to-face interviews.
In France, the fieldwork of the 7th round was accompanied by a replication of the survey on the pilot of the Elipss panel (Longitudinal Internet Studies for Social Sciences). Elipss is a probability-based online panel that is representative of the French population aged 18-75. Panel members are randomly selected by the French National Institute of Statistics and Economic Studies (INSEE) and equipped with a touch-screen tablet and a 3G Internet subscription. Every month they are asked to answer a 30-minute self-administered questionnaire proposed by researchers and selected by a scientific and technical committee.
The Center for Socio-Political Data (CDSP) at Sciences Po coordinates the ESS fieldwork for France and runs the Elipss panel; we have consequently developed expertise in the methodology and processes of each mode.
The face-to-face fieldwork was carried out from November 2014 to February 2015. To replicate it on the Elipss panel, we used a slot between December 2014 and January 2015. Administering the first part of the ESS core questionnaire on the Elipss panel provided an opportunity to examine how different data collection strategies may affect response behaviour.
Given that differences already arise from the specificities of these two protocols, the questionnaire required some adjustments that must be considered in such a comparison: for its replication in a self-administered mode on a mobile device, we had to adapt the design of some questions, which could have affected the response situation.
Differences in the structure of the samples should also be taken into account when explaining the observed differences. The length of the questionnaire, the format of the answer categories, the presence or absence of an interviewer, and the survey experience of Elipss panel members could likewise account for differences in response behaviour.
Focusing on type and design of questions, we will pay special attention to the social desirability effect often pointed out in face-to-face surveys.
This paper will highlight the specificities of the two survey designs (face-to-face vs self-administered online questionnaire) in order to discuss the scope of such a comparison. Finally, we will compare the answers according to whether the data were collected by interviewers or self-administered on a mobile device.
4. Converting Panelists from Mail Mode to Web Mode in Pew Research Center’s American Trends Panel
Mr Nick Bertoni (Pew Research Center)
Online probability-based panels offer a number of attractive benefits compared to the traditional RDD telephone survey. While offering researchers increased efficiency and flexibility in data collection, this emerging shift towards the web brings its own set of obstacles. A key challenge for researchers maintaining these panels is how to cover the segment of the population that does not have internet access.
Since its inception, a portion of the Pew Research Center’s American Trends Panel members have been interviewed by mail. Having conducted the panel for almost three years, we have observed that this mail component presents two major limitations: it requires a long field period (which makes quick, timely polling impossible), and it severely restricts questionnaire design (extensive skips, fills, and randomization are all impractical). To overcome these constraints, we set out to convert the mail component of the panel to web so that the panel would be entirely online.
An important consideration in undertaking such a conversion is the composition of the panelists involved. We know that the characteristics of mail panelists differ from those of web panelists, so great attention was given to converting as many panelists as possible in order to avoid attrition bias caused by panelists failing to convert. This paper discusses the sample composition of those who converted to web and compares it with that of those who did not convert. We take a close look at how the conversion affected the representativeness of the post-conversion panel and data quality more generally. These findings should be informative to other panel survey designers looking for ways to cover the non-internet population.