Wednesday 19th July, 16:00 - 17:30 Room: Q2 AUD2


Probability-based research panels 3

Chair Mr Darren Pennay (Social Research Centre, Australian National University)

Session Details

Around the world, online panels are now routinely used to gather survey data for many and varied purposes, including economic, political, public policy, marketing, and health research.

Web surveys, most of which are conducted via online panels, are a relatively recent development in the history of survey research, starting in the United States and Europe in the mid-1990s and then expanding elsewhere in the world. Worldwide expenditure on online surveys has quadrupled in the last 10 years, from US$1.5 billion in 2004 to US$6 billion in 2014.
From the mid-1990s to the mid-2000s, there was exponential growth in the creation of online panels and in the size of their memberships, leading to a proliferation of panel vendors. Since 2005, however, the growing need for panels with extremely large numbers of panellists has led to a consolidation of panel vendors through corporate acquisition (cf. Callegaro, Baker, Bethlehem, Göritz, Krosnick and Lavrakas, 2014).

As of 2015, the vast majority of online panels, and of the people who participate in them, had been established and recruited via non-probability sampling methods.
In the United States, parts of Europe, and now Australia, the increased use of the web for data collection has also resulted in the establishment of probability-based online research panels that enable scientific sampling of the population.

The intent of this session is to explore the development of probability-based online panels around the world and to encourage survey practitioners involved in probability-based online panels to present papers exploring the various methods used to establish and maintain these panels. Papers might explore issues such as methods for including the offline population, methods to maximise response and minimise attrition, and methods to reduce measurement error when administering questionnaires to panellists.

It is hoped that this session will be of interest to probability-based online panel practitioners, as well as to researchers who routinely use probability and non-probability online panels or want to learn more about such panels.

Paper Details

1. Nonresponse and Attrition in a Probability-based Online Panel
Professor Edith de Leeuw (Utrecht University)
Professor Joop Hox (Utrecht University)
Mr Benjamin Rosche (Utrecht University)

Probability-based online panels are regarded as state-of-the-art data collection tools in Europe and the USA (e.g., LISS in the Netherlands, the GIP and GESIS panels in Germany, ELIPSS in France, and GfK-Knowledge Networks in the USA) and are now being established in Australia and Canada. However, probability-based panels are also vulnerable to nonresponse during data collection, and attrition in particular is a constant worry for panel managers. Several theories of nonresponse have been developed over the years, and attitudes towards surveys are key concepts in these theories. Our research question is therefore: do survey attitudes predict wave nonresponse and attrition better than standard correlates of nonresponse, such as age, education, income, and urbanization?

To measure survey attitudes, a brief nine-question scale was developed for use in official statistics and (methodological) survey research. Its key dimensions are survey value (the value ascribed to surveys, e.g., surveys are seen as important for society and as a source of learning), survey enjoyment (reflecting the assumption that respondents like participating in surveys, e.g., surveys are seen as enjoyable and interesting), and survey burden (reflecting increasing demands, e.g., too many survey requests, too invasive, too long). Preliminary research in four online panels indicated that the scale is reliable and has predictive validity.

The data come from the Dutch probability-based online LISS panel. The Survey Attitude Scale was part of the annually measured core questionnaire from 2008 to 2011. Furthermore, the number of completed questionnaires and the number of invitations were available for each panel member over the years, as were 34 demographic and psychographic variables. Drawing on expert opinion from 31 survey methodologists, the most important correlates of nonresponse were added as control variables to our model. To predict the number of completed interviews and determine the explanatory power of the Survey Attitude Scale, a longitudinal negative binomial regression is employed.
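As a rough illustration of the kind of model described above (not the authors' code), the following Python sketch fits a negative binomial regression for the number of completed interviews, with the number of invitations as exposure; the file and column names are hypothetical stand-ins for the LISS variables, and the longitudinal structure is simplified to a pooled model.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical panel-member-level file: one row per panellist, with counts of
# completed questionnaires and invitations plus attitude and control variables.
df = pd.read_csv("liss_members_example.csv")

model = smf.glm(
    "n_completed ~ enjoyment + value + burden + age + education + urbanization",
    data=df,
    family=sm.families.NegativeBinomial(),  # count outcome with a log link
    exposure=df["n_invited"],               # invitations received act as exposure
).fit()

# exp(coefficient) gives the incidence rate ratio: the multiplicative change in
# completed interviews per one-unit change in the predictor.
print(np.exp(model.params).round(2))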

The Survey Attitude Scale consists of three sub-constructs: enjoyment, value, and burden. Respondents who perceived surveys as one unit more enjoyable (on average across waves, on a scale from 1 to 7) are estimated to complete roughly 1.22 times as many, or 22% more, interviews per year. The same one-unit change in perceived survey value corresponds to only 8% more interviews, while a one-unit increase in perceived survey burden reduces the number of completed interviews by 12%. These results hold even when control variables (e.g., age, education, urbanization) are added to the model: the regression coefficients of the survey attitudes hardly change, although most controls are significant.
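Read as incidence rate ratios (notation added here for clarity, derived from the percentages reported in the abstract rather than stated by the authors), these effects correspond approximately to

\[
\mathrm{IRR} = e^{\hat\beta}: \qquad
\mathrm{IRR}_{\text{enjoyment}} \approx 1.22, \qquad
\mathrm{IRR}_{\text{value}} \approx 1.08, \qquad
\mathrm{IRR}_{\text{burden}} \approx 0.88,
\]

i.e. 22% more, 8% more, and 12% fewer completed interviews per one-unit change in the respective attitude.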

Hence, survey attitude is a strong predictor of nonresponse over and above a person’s psycho-demographic profile. This makes it possible to identify potential nonrespondents in an online panel early on and to use tailored designs to improve response and reduce attrition. Moreover, emphasizing to respondents the enjoyable rather than the valuable aspects of surveys, and actively decreasing survey burden, seem promising strategies.


2. The Accuracy of Online Surveys: Coverage, Sampling, and Weighting
Professor Annelies Blom (School of Social Sciences, University of Mannheim)
Ms Daniela Ackermann-Piek (German Internet Panel, SFB 884, University of Mannheim)
Ms Susanne Helmschrott (German Internet Panel, SFB 884, University of Mannheim)
Ms Carina Cornesse (German Internet Panel, SFB 884, University of Mannheim and GESIS - Leibniz Institute for the Social Sciences)
Dr Christian Bruch (German Internet Panel, SFB 884, University of Mannheim)
Professor Joseph Sakshaug (School of Social Sciences, University of Manchester)

The polling industry has come under considerable strain after the recent erroneous predictions of the US presidential election and the Brexit referendum. Election polls need to be fielded within a short time frame, typically just a few days, and thus quick and lean survey modes such as online access panels are often preferred. However, these panels generally use nonprobability techniques to recruit panelists and to select survey participants. Some comparative studies show that the samples of such nonprobability online panels are not representative of the general population and lead to less accurate data than traditional probability-based offline surveys. In this light, we assess the data accuracy of probability and nonprobability, as well as online and offline, surveys in Germany.

We compare data from one probability online survey split into two samples (one including and one excluding the offline population), eight nonprobability online surveys, and two probability face-to-face surveys. As a metric of accuracy, we use the average absolute relative bias (AARB), which measures the average absolute relative bias between the survey and benchmark data, computed over the ordinal or nominal categories of the data. The German Mikrozensus and other official data sources serve as benchmarks.
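A common formulation of the AARB (the authors' exact definition may differ in detail), for a benchmark variable with $K$ categories, survey estimates $\hat{p}_k$ and benchmark values $p_k$, is

\[
\mathrm{AARB} = \frac{1}{K} \sum_{k=1}^{K} \frac{\lvert \hat{p}_k - p_k \rvert}{p_k},
\]

averaged in turn over the set of benchmark variables.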

Our results indicate that the offline surveys provide the most accurate survey data. Moreover, the probability online surveys are more accurate than the nonprobability online surveys. The quotas drawn and the weights provided by the nonprobability panels are insufficient to produce accurate samples, while our calibration weighting improves accuracy.
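To make the idea of calibration weighting concrete, the sketch below shows a bare-bones raking (iterative proportional fitting) routine in Python; it is illustrative only, and the variables and benchmark shares are invented rather than taken from the study.

import numpy as np
import pandas as pd

def rake(df, margins, weight_col="weight", max_iter=50, tol=1e-6):
    """Adjust survey weights until weighted margins match benchmark shares."""
    w = df[weight_col].to_numpy(dtype=float)
    for _ in range(max_iter):
        w_before = w.copy()
        for var, targets in margins.items():
            groups = df[var].to_numpy()
            total = w.sum()
            for category, share in targets.items():
                mask = groups == category
                w[mask] *= share * total / w[mask].sum()
        if np.max(np.abs(w - w_before)) < tol:
            break
    return w

# Hypothetical example: calibrate a small sample to invented population shares
# (in the study, benchmarks would come from the Mikrozensus or similar sources).
sample = pd.DataFrame({
    "sex": ["f", "m", "f", "m", "f", "m"],
    "age_group": ["18-39", "18-39", "40-64", "40-64", "65+", "65+"],
    "weight": [1.0] * 6,
})
margins = {
    "sex": {"f": 0.51, "m": 0.49},
    "age_group": {"18-39": 0.35, "40-64": 0.45, "65+": 0.20},
}
sample["weight"] = rake(sample, margins)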

With our research, we contribute to an urgently needed discussion of the suitability of nonprobability online surveys for election polling and social research. Moreover, this is the first study to assess data accuracy across different survey modes and sampling techniques in Germany.


3. Experiments in Recruiting the Life in Australia probability-based online panel
Mr Darren Pennay (Social Research Centre, Australian National University)
Dr Paul Lavrakas (Social Research Centre, Australian National University)
Dr Lars Kaczmirek (GESIS, Leibniz Institute for the Social Sciences)
Mr Graham Challice (Social Research Centre, Australian National University)

In Australia in 2014-15, 86 per cent of households had an internet connection (ABS Cat. 8146.0). Since 2010, online research has been the dominant mode of data collection in the Australian market and social research industry, supplanting computer-assisted telephone interviewing (CATI). In 2015, online research accounted for 41 per cent of the revenue generated by the industry, up from 31 per cent two years earlier (Research Industry Council of Australia, 2016), with much of this coming from non-probability internet panels. Unlike the United States and Europe, Australia had no national probability-based online panels as of 2016.
The authors of this paper are concerned that the rapid increase in the use of non-probability online panels in Australia has not been accompanied by an informed debate regarding the advantages and disadvantages of probability and non-probability surveys.

Thus, the 2015 Australian Online Panels Benchmarking Study was undertaken to inform this debate and report on the findings from a single national questionnaire administered across three different probability samples and five different non-probability online panels.

This study enables us to investigate whether Australian surveys using probability sampling methods produce results that differ in accuracy, relative to independent population benchmarks, from those of Australian online surveys relying upon non-probability sampling methods. In doing so we build on similar international research in this area (e.g. Yeager et al. 2011; Chang & Krosnick 2009; Walker, Pettit & Rubinson 2009). We discuss our findings as they relate to coverage error, nonresponse error, adjustment error, and measurement error.


4. Using probability samples to validate Voter Advice Application data
Dr Jill Sheppard (The Australian National University)

This study directly compares survey data on social attitudes collected from an opt-in sample of Voter Advice Application (VAA) users and from a randomly recruited, probability-based online panel of respondents. While much research to date has focused on the demographic representativeness of VAA data, less is known about its attitudinal and other forms of representativeness. This study of Australian samples contributes to the emerging literature.

VAAs are proliferating as a source of ‘big data’ among public opinion and political science researchers, despite concerns over the representativeness of their opt-in samples. During July 2016, VAA developer Election Compass collected email addresses from approximately 40,000 Australian users of its application in the weeks prior to the 2016 Australian federal election. In November 2016, this study will survey the sample of VAA users on their attitudes towards a range of Australian social issues. In December 2016, I will administer the same questionnaire to a probability-based sample, using an identical mode of administration and similar response maximisation techniques. The questionnaire contains a broad range of questions designed to identify dimensions (using factor analysis) of socio-political attitudes in Australian society. Comparing the composition of the dimensions and the relationships between variables within the two datasets will contribute to our understanding of incidental samples such as VAA users, and of the extent to which we can and should make inferences from VAA-generated data.
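As an illustrative sketch of this kind of comparison (not the author's analysis plan; the file names, item prefix, and number of factors are assumptions), the same factor model could be fitted to both samples and the loading patterns compared:

import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical input files for the two samples.
vaa = pd.read_csv("vaa_users_survey.csv")              # opt-in VAA users
panel = pd.read_csv("probability_panel_survey.csv")    # probability-based sample

# Assume the attitude items share a common prefix in both files.
items = [c for c in vaa.columns if c.startswith("att_")]

fa = FactorAnalysis(n_components=4, random_state=0)    # number of dimensions is illustrative
loadings_vaa = pd.DataFrame(fa.fit(vaa[items]).components_.T, index=items)
loadings_panel = pd.DataFrame(fa.fit(panel[items]).components_.T, index=items)

# Similar loading patterns across samples would suggest the attitudinal structure
# recovered from VAA users generalises; marked differences would caution against
# drawing population inferences from the opt-in data alone.
print(loadings_vaa.round(2))
print(loadings_panel.round(2))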