Wednesday 19th July, 09:00 - 10:30 Room: Q2 AUD2


Probability-based research panels 1

Chair Mr Darren Pennay (Social Research Centre, Australian National University)

Session Details

Around the world, online panels are now routinely used to gather survey data for many purposes, including economic, political, public policy, marketing, and health research.

Web surveys, most of which are conducted via online panels, are a relatively recent development in the history of survey research, starting in the United States and Europe in the mid-1990s and then expanding elsewhere in the world. Worldwide expenditure on online surveys quadrupled over a decade, from US$1.5 billion in 2004 to US$6 billion in 2014.
From the mid-1990s to the mid-2000s, the number of online panels and the size of their memberships grew rapidly, leading to a proliferation of panel vendors. Since 2005, however, the growing demand for panels with extremely large numbers of panellists has driven a consolidation of vendors through corporate acquisition (cf. Callegaro, Baker, Bethlehem, Göritz, Krosnick and Lavrakas, 2014).

As of 2015, the vast majority of online panels, and of the people who participate in them, had been established or recruited via non-probability sampling methods.
In the United States, parts of Europe, and now Australia, the increased use of the web for data collection has also resulted in the establishment of probability-based online research panels that enable scientific sampling of the population.

This session explores the development of probability-based online panels around the world and invites survey practitioners involved in such panels to present papers on the methods used to establish and maintain them. Papers might explore issues such as methods for including the offline population, maximising response and minimising attrition, and reducing measurement error when administering questionnaires to panellists.

It is hoped that this session will be of interest to probability-based online panel practitioners as well as to researchers who routinely use probability and non-probability online panels or who want to learn more about such panels.

Paper Details

1. Usage of Online Panels in the Survey Methodology Field: A Systematic Review
Miss Chiara Respi (University of Milano-Bicocca)
Professor Katja Lozar Manfreda (University of Ljubljana)

In addition to serving as a sample source for substantive work in social and marketing research, online panels are often used in the survey methodology field. In particular, they serve as (1) objects of research in themselves or (2) a sample source for various experimental studies on survey data quality. This calls for a review of their usage and quality as used in survey methodology. Within this context, we present a systematic review that aims to assess (i) the types of online panels used in survey methodology, (ii) the quality of these online panels, and (iii) the usage of online panels as a sample source for research on survey data quality.

We use the bibliographic database of WebSM, the main information source on web survey methodology (Hewson et al., 2016; Callegaro et al., 2015; Lozar Manfreda and Vehovar, 2007), to select empirical survey methodology studies that relate to online panels. To address our research questions, we define coding categories for the type and other characteristics of online panels (e.g., panel size, target population) and for the purpose for which the panels are used. Related to purpose, we code (i) panel data quality aspects for studies on the quality of the panel itself and (ii) methodological questions for studies using panels as a sample source for survey methodology research. We analyzed 66 journal papers and book sections on survey methodology, published in English from 2012 to 2016, that refer to online panels.

The results show that online panels are a valid sample source for various experimental studies in survey methodology. In addition, the systematic review highlights partially or entirely unexplored issues about the quality of panelists’ answers, while coverage and nonresponse issues in online panels are already well researched.

This paper casts light on the features characterising different types of online panels, points out the main indicators of their data quality, and examines the feasibility of using online panels for survey methodology research. A more general aim is to increase knowledge on the usage of online panels, one of the current and future trends in survey research.


2. Building and Maintaining a True Probability-based Internet Panel
Mr Joris Mulder (CentERdata, Tilburg University)

The MESS project facilitates advanced data collection using the Internet, new measurement devices, and the latest technology. The project is carried out by CentERdata (Tilburg University, The Netherlands). The core element of the project is the LISS panel (Longitudinal Internet Studies for the Social sciences): an Internet panel specifically intended for scientific research.

The LISS panel consists of approximately 8000 individuals who participate in monthly Internet surveys and advanced experiments. The panel is based on a random sample drawn from the population registers. Persons not included in the original sample cannot participate, so there can be no self-selection. People without a computer or Internet connection are provided with equipment to participate, so the offline population is included as well.
Every month, LISS panel members complete online questionnaires of about 30 minutes in total. They receive a monetary incentive for each completed questionnaire. One member of the household provides the household data and updates this information at regular time intervals.

In this presentation we discuss the initial procedures for setting up such a panel (i.e. sampling and recruitment of panel members) and how to maintain it. We show results of several incentive experiments: how incentives help in the recruitment phase, how they can prevent attrition during panel participation, and their effects on overall response. We also discuss the results of our ‘sleepers study’, an experiment on encouraging and maintaining participation in the LISS panel (using letters, incentives and feedback) to minimize attrition.


3. Recruiting and Representativeness of a Probability Internet Panel
Ms Jill Darling (University of Southern California)
Dr Arie Kapteyn (University of Southern California)
Ms Tania Gutsche (University of Southern California)

The Understanding America Study (UAS) is a probability-based internet panel. Panel members are recruited exclusively through Address Based Sampling; respondents without prior access to the internet receive a tablet and paid broadband internet access. The UAS started in 2013 and currently comprises about 6000 panel members aged 18 and over. Panel members usually answer one or two surveys per month (with a maximum length of 30 minutes per survey). The result is a very large set of background characteristics available for all panel members, including the complete core instrument of the US Health and Retirement Study, which is administered every two years; personality measures (the Big Five); and numeracy, cognitive, and financial literacy measures. Respondents are compensated for their time at a rate of $20 per 30 minutes of interview time. Annual attrition rates are modest (on the order of 7-8% per year).

The UAS panel’s response rates, while low compared to historical survey samples, tend to be somewhat higher than the falling rates experienced by many telephone surveys. After an initial multi-step recruitment process (Dillman 2014), on average more than a third (36%) of our national samples return the initial paper survey, which includes consent to contact them about participating in the panel. After a series of empanelment steps, about 12% eventually become active participants. During our 2016 recruitment drive, we implemented a series of experiments aimed at increasing response and retention at each stage of the process, and in this paper we share results and next steps.

We argue that our methods increase the panel’s representativeness of the U.S. population over what can be achieved without providing internet devices. Of the current nearly 6000 panel members, 310 (5.2%) have received an internet-connected tablet. Similar to Leenheer and Scherpenzeel (2013) for the LISS panel in the Netherlands, we find that, in comparison to the Internet households, non-Internet households in our panel are more likely to have low incomes and low education, and to be non-white, less healthy, and older (70+). Importantly, the recruiting rate among these households varies little across subgroups, except for the elderly, who tend to sign up somewhat less often. This suggests that, with respect to the demographics studied, the recruited non-Internet households are representative of the part of the population without Internet (except possibly with respect to age). Moreover, since the non-Internet households are very different from households with prior Internet access, adding a non-Internet sample addresses the coverage deficiency of an Internet-only sample.

We report on the methods we use to recruit and retain our panel, as well as lessons learned from our recruiting experiments.
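A rough arithmetic note on the recruitment funnel sketched above, using only the two figures reported in the abstract (the 36% paper-survey return rate and the roughly 12% final participation rate; losses at the intermediate empanelment steps are not itemised there): the implied conversion from returned consent survey to active panel member is about one third,

$$\frac{\text{active participants}}{\text{returned paper surveys}} \approx \frac{0.12}{0.36} \approx 0.33,$$

so roughly two of every three households that return the paper survey drop out somewhere between consent and active participation.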


4. The feasibility of establishing a cross-national probability-based online panel
Dr Ana Villar (City, University of London)
Ms Elena Sommer (City, University of London)
Mr Didrik Finnøy (Norwegian Centre for Research Data)
Mr Bjørn-Ole Johannesen (Norwegian Centre for Research Data)
Dr Mare Ainsaar (University of Tartu)
Professor Slavko Kurdija (University of Ljubljana)
Mr Alun Humphrey (NatCen Social Research)
Mr Indrek Soidla (University of Tartu)
Ms Tina Vovk (University of Ljubljana)

Online survey panels enable cost-effective and timely data collection. However, obtaining a representative sample can be challenging and expensive: sample frames for probability sampling of the general population usually do not include online contact information, and recruitment thus relies on offline contact modes that either suffer from low recruitment rates (telephone and mail) or involve large costs (face-to-face). The problem multiplies when researchers embark upon cross-national data collection, where challenges and solutions vary across countries. Random selection of individual respondents within households, for example, poses different challenges depending on the available sampling frames.

Instead of setting up a brand-new recruitment effort to build a panel, researchers could potentially piggy-back on existing survey projects that have already addressed the issue of selecting and contacting sample units, inviting those who agree to participate in the survey to continue their cooperation further. Cost-efficient as this approach may seem, it has in the past been met with concerns about the additional burden for respondents and interviewers.

A pilot study was set up to test this “piggy-backing” approach using the European Social Survey data collection efforts. Participants of the CROss-National Online Survey (CRONOS) panel were recruited at the end of the 2016 ESS interviews in the UK, Slovenia and Estonia. Participation involved completing six 20-minute online surveys over one year. Respondents were offered gift cards worth £5/€5 with every new survey invitation. Those without internet access for private purposes were offered an internet-enabled tablet, training on how to use it, and telephone support.

This paper evaluates the efficiency of this approach by examining recruitment rates across countries and across different subpopulations. We will analyse the representativeness of the recruited sample, comparing those who participated with those who did not on important variables from the ESS interview. The initial recruitment rate varies by country (between 51% in Estonia and 61% in the UK), by age (80% for respondents younger than 30 versus 46% among those older than 60), by gender (54% for women versus 60% for men), and by internet access (50% for those without internet access for personal use compared with 78% for those with access). In addition, this paper will consider the challenges of setting up a cross-national probability-based online panel.


5. Design, setting-up and outcome indicators of a probability-based online panel in Spain
Ms Sara Pasadas del Amo (IESA/CSIC)
Ms Rafaela Sotomayor Lozano (IESA/CSIC)
Mr Manuel Trujillo Carmona (IESA/CSIC)
Mr Juan Antonio Domínguez Álvarez (IESA/CSIC)
Ms Carmela Gutiérrez Aranda (IESA/CSIC)

The presentation will look into the processes and outcomes of designing, setting up and maintaining a probability-based online panel in Andalusia. The Institute for Advanced Social Studies (IESA), a unit of the Spanish National Research Council (CSIC), recruited the Citizen Panel for Social Research in Andalusia (PACIS) from September 2014 to March 2015. PACIS members were recruited face to face, using the official address list (catastro) to select the units in the sample, and offline members are contacted and interviewed by telephone (landline and mobile). The panel comprises 3,439 people living in 2,809 households and has served as a sampling frame for three cross-sectional studies conducted in 2015 and 2016.
The paper describes the sampling design and the procedures for recruiting and maintaining the panel. We then present the outcome rates attained at the recruitment stage and in each of the three waves conducted so far. Finally, we analyze the sample composition and the potential biases produced by the combination of nonresponse at both the recruitment and the surveying stages.