
ESRA 2023 Glance Program


All time references are in CEST

Push-to-Web Surveys: Challenges and Opportunities 2

Session Organisers Dr Olga Maslovskaya (University of Southampton)
Dr Peter Lugtig (Utrecht University)
Time Wednesday 19 July, 16:00 - 17:30
Room U6-01c

We live in a digital age with widespread use of technologies in everyday life. Technologies change very rapidly and affect all aspects of life, including surveys and their designs. Online data collection is now common in many countries, and some surveys have started employing a push-to-web approach (for example, the Generations and Gender Surveys in some countries, including the UK), in which offline contact modes are used to encourage sample members to go online and complete a web questionnaire. This method of data collection is typically used when sampling frames do not include email addresses for all members of the target population. It is important to address the different methodological challenges and opportunities associated with push-to-web surveys.

This session welcomes submissions of papers on different methodological issues associated with push-to-web surveys in both cross-sectional and longitudinal contexts. Examples of topics of interest include:

• Coverage issues
• Data quality issues
• Unit nonresponse
• Response rates and nonresponse bias
• Mobile device use
• Questionnaire length
• Other topics

We encourage papers from researchers with a variety of backgrounds and across different sectors, including academia, national statistics institutes, data collection organisations, and research agencies.

This session aims to foster discussion, knowledge exchange and shared learning among researchers and methodologists around issues related to push-to-web surveys. The format of the session will be designed to encourage interaction and discussion between the presenters and audience.

Keywords: online surveys, push-to-web surveys, nonresponse bias, response rates, data quality

Papers

New Methods for Identifying Fake Respondents in Online Probability-based Surveys

Mrs Katya Kostadintcheva (London School of Economics and Political Science) - Presenting Author
Professor Jouni Kuha (London School of Economics and Political Science)
Professor Patrick Sturgis (London School of Economics and Political Science)

One of the challenges associated with address-based sampling in online probability surveys is the fabrication of non-existent respondents. The provision of incentives introduces the risk that some respondents create additional fake profiles within the same household in order to claim more incentives. This can lead to data quality issues, respondent-related measurement error, and ultimately incorrect survey estimates. Validation checks are applied to detect fake respondents within the same household; however, they can be time-consuming and involve manual checks of information entered by respondents (such as name and email), which is itself prone to error or falsification.

This research investigates new methods for detecting fake respondents using paradata and response patterns to survey questions, with the aim of streamlining and automating the validation process. It builds on existing research, which has so far focused on the detection of data fabrication by face-to-face interviewers and by respondents from online (nonprobability) panels, both groups already familiar with the survey process. This paper explores the issue specifically in an online probability-based survey, where respondents are randomly sampled and likely have low prior familiarity with surveys.

The research uses data from the UK's Kantar Public Voice online probability panel, which employs address-based sampling to recruit respondents. Using paradata, survey responses, and Kantar's quality indicators from the panel recruitment surveys, we develop a number of satisficing behaviour indicators to test whether data from fake respondents differ from data from real respondents. The novel contribution of this paper is the application of latent variable models to identify likely fake respondents on the basis of distinctive response patterns to substantive items, combined with paradata indicators. We then apply the developed model to a separate recruitment survey to test how successfully it can identify fake respondents.
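
As a rough illustration of the kind of approach described above, the sketch below fits a two-class latent variable model to a few satisficing indicators; the indicators, the use of a Gaussian mixture as a stand-in for a latent class model, and all data are hypothetical rather than the authors' actual specification.

    # Illustrative two-class latent variable model over satisficing
    # indicators; a Gaussian mixture stands in for the latent class
    # model, and all inputs are simulated placeholders.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Hypothetical per-respondent indicators built from paradata and
    # response patterns: completion speed (z-score), straight-lining
    # share, and item-nonresponse share.
    X = np.column_stack([
        rng.normal(size=1000),
        rng.beta(1, 8, size=1000),
        rng.beta(1, 10, size=1000),
    ])

    # Fit a two-class mixture: one class for typical responding, one
    # for atypical, potentially fake, responding.
    model = GaussianMixture(n_components=2, random_state=0).fit(X)
    posterior = model.predict_proba(X)

    # Flag respondents whose posterior probability of belonging to
    # the smaller, atypical class exceeds a review threshold.
    atypical = int(np.argmin(model.weights_))
    flagged = posterior[:, atypical] > 0.9
    print(f"{flagged.sum()} of {len(X)} respondents flagged for review")

In practice the flagging threshold would be calibrated against cases validated by the existing manual checks rather than fixed in advance.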


Is SMS a Viable Recruiting and Distribution Method for Online Surveys?

Ms Trine Dale (TOI - Institute of Transport Economics/Norwegian Centre for Transport Research) - Presenting Author
Ms Ingunn Opheim Ellis (TOI - Institute of Transport Economics/Norwegian Centre for Transport Research)
Ms Ingeborg S. Hesjevold (TOI - Institute of Transport Economics/Norwegian Centre for Transport Research)

With declining response rates and no reliable access to registers of e-mail addresses, it is necessary to investigate other recruiting and distribution methods for online surveys. In Norway, most surveys are conducted online, and SMS has become an alternative to the traditional push-to-web approach for recruiting respondents and providing links to surveys. For respondents it can be less cumbersome, as they get direct access to the survey on their phones. This can contribute to higher response rates in a high-tech society. Norway has 96% smartphone coverage, and most respondents use smartphones to respond. Using SMS to distribute invitations and links is a logical next step. However, the literature cautions against using SMS to distribute surveys, as this can be considered an invasion of privacy, cause negative reactions, and affect the survey climate. People are also becoming more cautious and reluctant to click on links in SMS messages due to phishing and fraudulent messages.
To learn more about respondents' perceptions of the SMS approach, we will conduct an experiment on survey distribution using a split-sample approach. Half the respondents are invited via SMS and half via e-mail, with survey links included in both. The survey itself will measure attitudes towards, and experiences with, surveys distributed by SMS, as well as preferences regarding contact method for online surveys. This approach will allow us to test whether response rates, dropout rates, and survey responses vary between methods and demographic groups. We will also investigate whether the SMS message itself affects willingness to participate, by analysing results from previous studies. The goal is to gain insight into respondents' attitudes, experiences, and preferences when it comes to contact modes for surveys, and whether SMS is a viable recruiting and distribution mode.
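
As a rough illustration only, the sketch below shows how response rates in the two invitation arms of such a split-sample design might be compared; the counts are invented placeholders, not results from this study.

    # Illustrative test of whether response rates differ between the
    # SMS and e-mail invitation arms; all counts are hypothetical.
    from scipy.stats import chi2_contingency

    # rows: SMS arm, e-mail arm; columns: responded, did not respond
    table = [[420, 1580],
             [500, 1500]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")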


A Dollar and A Quarter For Your Thoughts?

Ms Robyn Rapoport (SSRS) - Presenting Author
Mr Rob Manley (SSRS)
Mr Jonathan Best (SSRS)

Monetary pre- and post-incentives are frequently used in address-based sample (ABS) push-to-web designs to help induce cooperation and thereby enhance response rates. Considered a best practice, monetary pre-incentives are typically more effective than post-incentives at improving response rates; however, post-incentives can be a useful additional lever for improving sample representativeness. Identifying the optimal amounts of pre- and post-incentives, and the populations to which to provide them, is an ongoing area of inquiry.
In 2022, SSRS conducted a large, national ABS study in the US that included a prepaid cash incentive of either $1 or $1.25 in the invitation mailing. For this experiment, one third of the sample was assigned to receive $1 in cash, while the remaining sample received $1.25. In both groups, the cash was visible through the envelope address window.
Additionally, a $10 post-incentive was offered to approximately 80% of the sample across both pre-incentive groups, namely the addresses anticipated to have lower-than-average response propensities based on previous ABS studies. Specifically, the post-incentive was targeted to sampled addresses in geographic strata with the highest proportions of African Americans, Hispanics, and lower-income households. This design allowed us to evaluate the effectiveness of the $1.25 pre-incentive compared to $1, as well as the effectiveness of the post-incentive contingent on the amount of the pre-incentive.
Overall, we found that the $1.25 pre-incentive resulted in a better sample yield; however, the operational costs of including the extra quarter are likely offset by the improvement in yield. Additionally, the targeted post-incentive may be a more cost-effective way of improving response and representativeness than the extra prepaid quarter. This presentation will compare the results of the two treatments in terms of overall response, representativeness, and cost-effectiveness.
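
As a rough illustration of how such a design might be analysed, the sketch below fits a simple logistic model of response on the two incentive factors; the synthetic data and the variable names pre125 and post10 are hypothetical, not SSRS's actual analysis.

    # Illustrative logistic model of response on pre-incentive amount
    # ($1 vs $1.25) and post-incentive offer; data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 4000
    df = pd.DataFrame({
        "pre125": rng.integers(0, 2, n),  # 1 = $1.25 pre-incentive
        "post10": rng.integers(0, 2, n),  # 1 = $10 post-incentive offered
    })
    # Hypothetical response propensities, for illustration only.
    p = 0.18 + 0.02 * df["pre125"] + 0.04 * df["post10"]
    df["responded"] = rng.binomial(1, p)

    # The interaction term asks whether the post-incentive effect
    # depends on the pre-incentive amount.
    fit = smf.logit("responded ~ pre125 * post10", data=df).fit(disp=False)
    print(fit.summary())

Note that in the actual study the post-incentive was targeted rather than randomised, so a real analysis would need to condition on the targeting strata.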


Response Rates, Accuracy, and Costs of Push-To-Web Surveys

Ms Andreja Praček (University of Ljubljana) - Presenting Author
Mr Vasja Vehovar (University of Ljubljana)

Probability-based web surveys of the general population often suffer from complicated and expensive recruitment procedures. Push-to-web surveys typically recruit by mail. The corresponding research designs have many parameters: incentive type, incentive value, number of reminders, combination of recruitment modes, etc. In this context, we conducted a research experiment with postal mail recruitment to a web survey (n = 2,200). Participants were divided into five experimental groups. The first group received a conditional incentive in the form of a €10 gift card. The second group received an unconditional incentive in the form of a €5 gift card. The third received a conditional incentive in the form of a €5 gift card. The fourth group received no incentive. The fifth group received an unconditional incentive of €5 in cash.

The results show a pattern similar to the findings in the literature. However, the added value and focus of our study is the finding that looking at response rates alone is not sufficient to compare different push-to-web designs. Rather, the decision to use a particular design should rest on broader considerations that include cost and survey error. For these reasons, we include the calculation of costs and various simulations related to nonresponse bias, which, along with precision (i.e., sampling variance), is the main component for measuring accuracy (i.e., mean squared error), a good approximation of the overall level of survey error. The calculations show that relying only on response rates as a criterion for selecting particular push-to-web designs can be very deceptive. Therefore, both accuracy and cost should be considered when deciding on the optimal push-to-web design.
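
The accuracy criterion referred to above is the usual decomposition of mean squared error into sampling variance plus squared bias, MSE = Var + Bias^2. As a rough illustration of an accuracy-versus-cost comparison across the five groups, the sketch below uses invented response rates, bias levels, and unit costs; none of these numbers come from the study.

    # Illustrative accuracy (MSE) and cost comparison across the five
    # experimental groups; every number here is a hypothetical input.
    designs = {
        # name: (response_rate, assumed_bias, cost_per_invitee_in_eur)
        "conditional €10 card":  (0.30, 0.010, 3.0),
        "unconditional €5 card": (0.35, 0.008, 5.5),
        "conditional €5 card":   (0.28, 0.012, 2.0),
        "no incentive":          (0.20, 0.020, 0.5),
        "unconditional €5 cash": (0.40, 0.006, 5.8),
    }

    n_invited = 2200 / 5   # five experimental groups, as in the study
    s2 = 0.25              # assumed element variance of a proportion

    for name, (rr, bias, cost) in designs.items():
        n_resp = n_invited * rr
        mse = s2 / n_resp + bias ** 2   # sampling variance + bias^2
        total_cost = n_invited * cost
        print(f"{name:24s} MSE={mse:.5f} cost={total_cost:6.0f} EUR")

A design with a lower response rate can still win on this criterion if its nonresponse bias and cost are low enough, which is the point the abstract makes about response rates being deceptive.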


Pushing to Web Populations Less Prone to Answer: An Adaptive Design in a French Survey on Professional Careers

Mrs Noémie Soullier (Santé publique France) - Presenting Author
Mr Loïc Garras (Santé publique France)
Mrs Marie Houot (Santé publique France)
Mrs Corinne Pilorget (Santé publique France)

We studied a French survey on professional careers in which some populations were prioritised into a mixed-mode design, while others were in a web-only design. In the mixed-mode design, a push-to-web approach was used: web non-respondents were called and encouraged to respond online. In a final step, the remaining non-respondents were offered the option to respond by phone. The prioritised subgroups were rare populations (such as farmers) or subpopulations less prone to respond to the survey (such as young people). For these populations, the mixed-mode design was highly effective, with a participation rate of 58%, versus 22% in the web-only design. In particular, the push-to-web approach produced an 18% gain in participation on the web. With this adaptive survey design, participation in the survey was more homogeneous across subgroups: overall, the participation rate was 51% for the prioritised populations versus 43% for the other populations. Additionally, with the adaptive survey design, selection bias was lower than with a web-only design, and costs were lower than with a phone-only design. Contacting participants by phone nonetheless seemed necessary to ensure that all types of careers are represented among respondents, in particular those of less qualified people. We showed that using the phone mode sparingly and targeting specific populations was a good way to optimise the cost-efficiency of our survey.
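
As a rough illustration of the allocation logic described above, the sketch below assigns contact protocols to sample members; the group labels and the helper function are hypothetical, not Santé publique France's implementation.

    # Illustrative allocation of contact protocols in the adaptive
    # design: prioritised subgroups get the mixed-mode sequence,
    # everyone else gets web only. All names are hypothetical.
    def assign_protocol(member):
        """Return the ordered contact attempts for one sample member."""
        prioritised = member["rare_population"] or member["low_propensity"]
        if prioritised:
            # web invitation, motivational push-to-web call, then a
            # phone interview as the final fallback
            return ["web_invite", "push_to_web_call", "phone_interview"]
        return ["web_invite"]

    print(assign_protocol({"rare_population": True, "low_propensity": False}))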