ESRA logo

ESRA 2025 Preliminary Glance Program


All time references are in CEST

Merits and limits of respondent recruitment through social media

Session Organisers Professor Simon Kühne (Bielefeld University)
Professor Jan Karem Höhne (DZHW, Leibniz University Hannover)
Mrs Jessica Donzowa (Max Planck Institute for Demographic Research)
TimeTuesday 15 July, 09:00 - 10:30
Room Ruppert 002

The demand for high-quality data from surveys – especially from web surveys – is at an all-time peak and continues to grow. At the same time, (web) surveys struggle with low response rates and researchers constantly look for innovative and cost-efficient ways to sample, contact, and interview respondents. One promising approach makes use of social media platforms, such as Facebook, Instagram, and TikTok, for respondent recruitment. Utilizing sophisticated advertisement and target systems offered by these platforms, research has shown that social media recruitment provides an easy, quick, and inexpensive access to an unprecedented and diverse respondent pool (including rare and hard-to-reach populations). Nonetheless, many open questions remain with respect to sample representation, recruitment and advertisement strategies, and data quality and integrity.

In this session, we welcome contributions that are based on empirical studies as well as methodological and theoretical considerations dealing with respondent recruitment through social media platforms. This includes the following research areas (but not limited to):

- Comparing social media platforms for respondent recruitment in terms of costs, quality, and field phase management
- Ad design, invitation wording, and participation metrics
- Strategies for incentivizing respondents and payout
- Targeting designs and sample compositions
- Strategies to cope with user comments and undesired user interactions with ads
- Approaches to deal with missing data in the form of nonresponse and dropouts
- Measurement quality in terms of reliability and validity
- Threat of bots and fake interviews for data integrity
- Comparing social media, nonprobability, and probability-based samples
- Weighting procedures to increase representation and generalizability
- Replications of empirical studies and findings

Keywords: Social media recruitment, Data quality, Data integrity, Representation, Sample comparisons

Papers

Exploring Problematic Response Behaviors in Social Media-Recruited Surveys

Mrs Zaza Zindel (German Centre for Integration and Migration Research) - Presenting Author

Social media recruitment has transformed survey research, offering efficient and cost-effective access to diverse and hard-to-reach populations. However, this method also presents significant challenges, such as satisficing, low-effort participation, and identity misrepresentation-issues that remain understudied in social media-recruited surveys. These behaviors pose substantial threats to data qualityand raise important questions: To what extent do such behaviors occur in social media-recruited surveys, and how do they affect analytical results?

This study examines data from a web survey on labor market discrimination against women wearing headscarves in Germany, conducted in 2021 and repeated in 2024. Recruitment was conducted through targeted Facebook ads designed to reach diverse respondents. Problematic responses were identified using indicators such as item non-response, straight-lining, speeding, and identity misrepresentation. Differences between problematic and non-problematic respondents were analyzed using statistical tests (e.g., chi-square, Mann-Whitney U), and multivariate regression models were used to assess the impact of these behaviors on key outcomes, such as perceived anti-Muslim discrimination.

The results reveal systematic differences between problematic and non-problematic respondents. For example, younger and less educated participants were more likely to engage in behaviors such as speeding and straight-lining, leading to distorted results in multivariate analyses. Data cleaning significantly improved the robustness of the findings, highlighting the critical importance of quality control measures in surveys recruited via Facebook.

This study provides a detailed examination of the prevalence, predictors, and consequences of problematic response behavior in social media-recruited surveys. It offers practical recommendations for survey practitioners, emphasizing the need to incorporate quality assurance strategies to enhance data validity and reliability.


LLM-driven bot infiltration: Protecting web surveys through prompt injections

Professor Jan Karem Höhne (DZHW, Leibniz University Hannover) - Presenting Author
Mr Joshua Claassen (DZHW, Leibniz University Hannover)
Mr Ben Wolf (DZHW, Leibniz University Hannover)

Cost- and time-efficient web surveys have progressively replaced other survey modes. These efficiencies can potentially cover the increasing demand for survey data. However, since web surveys suffer from low response rates, researchers and practitioners consider social media platforms as new recruitment source. Although these platforms provide advertisement and targeting systems, the data quality and integrity of web surveys recruited through social media might be threatened by bots (programs that autonomously interact with systems). Bots have the potential to shift survey outcomes and there is literature on how bots infiltrate social media platforms, distribute fake news, and possibly skew public opinion. However, established strategies to stop and detect bots in web surveys are not reliable. This especially applies to LLM-driven bots that are connected to Large Language Models (LLMs). In this study, we therefore focus on LLM-driven bots and investigate whether prompt injections (instructions to change LLM-driven bot behavior) can help to detect bot infiltration in web surveys. To this end, we utilize two bots that are linked to Google’s LLM Gemini Pro and that have different capabilities (e.g., personas and memory). We instructed our two LLM-driven bots to respond to an open question 800 times. This question included either no injection (control), a jailbreaking injection (instructing the LLM to give a specific response), or a prompt leaking injection (instructing the LLM to reveal its prompt). In a next step, we analyze the synthesized data from our LLM-driven bots evaluating the efficiency of the prompt injections under investigation. By focusing on LLM-driven bots, our study stands out from previous studies that mostly focused on conventional, rule-based bots. The investigation of prompt injections potentially extends the methodological toolkit to protect web surveys recruited through social media platforms from LLM-driven bot infiltration.


Strategies for social media advertising: insights from a non-probability online survey series

Ms Eszter Sandor (Eurofound) - Presenting Author
Mr Michele Consolini (Eurofound)

Eurofound launched a non-probability online survey in March 2020 aiming at measuring the quality of life and work during the early pandemic (Living, working and COVID-19) with a quick turnaround time. The survey was promoted on Meta platforms via a conversion campaign, using COVID-19-related pictures and messages. The survey was repeated four times during the pandemic, and a panel of regular respondents was also established. In spring 2023, a sixth wave was launched, with the pandemic no longer in focus, titled Living and working in the EU. Pixel technology was abandoned due to potential data protection issues, and a traffic campaign was implemented. Instead of pandemic-related imagery, advertisement pictures and messages were tailored to the main contents of the survey. Response to the survey was much lower among social media respondents.

To address this issue, in spring 2024 (wave 7), Eurofound implemented a new advertising strategy, based on advice from previous research on social media advertising. Advertisement sets (assets) were developed for four distinct topics, each with a salient and non-salient image and message combination, as well as a final neutral and general advertisement, so altogether nine assets were launched. Advertisements were placed on the Meta Newsfeed only instead of automatic placement, only static images were used, and ads were placed on an additional platform (LinkedIn).

This presentation compares the sample size and composition achieved in the 7th wave with previous waves, and the number and types of respondents reached by each of the nine advertisements and the two social media platforms.