ESRA 2019 Draft Programme at a Glance

Generating methodological insights from the ESS probability-based online panel CRONOS 1

Session Organisers Dr Gianmaria Bottoni (European Social Survey, City University of London)
Professor Rory Fitzgerald (European Social Survey, City University of London)
Dr Elena Sommer (SHARE ERIC)
Dr Ana Villar (Facebook)
Time: Tuesday 16th July, 16:00 - 17:00
Room D26

This session is open to anyone who would like to analyse data and experiments from the CROss-National Online Survey (CRONOS) panel, a project led by the European Social Survey to explore the feasibility of using existing survey infrastructures to recruit probability-based samples for online panels. The CRONOS project capitalised on an existing probability-based face-to-face survey to establish a probability-based sample for an online panel, which now comprises data from seven waves together with experimental data and paradata. All these data are available to any researcher interested in cross-national survey research methodology and can be downloaded for free from the ESS website.

CRONOS was part of the Horizon 2020 grant Synergies for Europe’s Research Infrastructures in the Social Sciences (SERISS). The panel followed an input-harmonisation approach, with the survey design coordinated by ESS ERIC with crucial continuous support from national teams and other SERISS partners. Using these harmonised protocols, the national teams in Estonia, Great Britain and Slovenia carried out the recruitment of panel members on the back of the European Social Survey (ESS). All ESS sampled units aged 18 or older were eligible to participate in the panel after completing the ESS Round 8 (2016) face-to-face interview. Those who agreed were invited to participate in six 20-minute online surveys over a period of 12 months.

Apart from a large amount of substantive data, the CRONOS panel served as a platform for methodological testing. Across the six CRONOS waves, numerous methodological and substantive areas were covered. Methodological efforts included pretesting of new questions, experiments on question wording and satisficing, incentive approaches, contact modes, and other studies. Substantive questions covered various topics related to family, religion and values, amongst others, from established cross-national studies such as the Generations and Gender Programme (GGP), the European Values Study (EVS) and the International Social Survey Programme (ISSP). The CRONOS data can be analysed together with responses from the ESS face-to-face interview, allowing for new substantive and methodological analyses that were not previously possible.
This session invites papers on methodological findings from the panel. Papers might cover: representativeness, cost analyses, contact mode effects and incentive strategies, effects of device on measurement, efforts to improve survey completion, respondent behaviour, and the impact of including off-liners through device provision.

Keywords: CRONOS, ESS, online survey panel, methodological experiments, mode effects

Measurement Reliability, Validity, and Quality in the CRONOS Panel

Dr Wiebke Weber (Universitat Pompeu Fabra) - Presenting Author
Mr Marc Asensio (Universitat Pompeu Fabra)

In order to estimate the size of both random and systematic errors, or their complements, reliability and validity, it is necessary to ask the same questions (measuring the same "traits") several times of the same respondents, using different versions of those questions (called "methods"). This multitrait-multimethod (MTMM) approach has been widely used over the past decades and has provided a large amount of information about the quality of survey questions for many different topics, countries, languages, and scale formats. This information has proved useful for designing new questionnaires and for correcting for measurement errors. However, most of the previous research has been done in face-to-face surveys, and the mode of data collection may affect the quality of survey questions (because of visual versus oral stimuli, the presence or absence of interviewers, etc.).

The Cross-National Online Survey (CRONOS) panel implemented two MTMM experiments, in Wave 2 and Wave 6. The experiments compare methods that vary in the number of scale points, labels, direction of labelling (i.e. from positive to negative or negative to positive), and whether questions are answered directly or in two steps (branching). Using the true score model, we estimated the reliability, validity, and quality of the different questions and scales.
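In the true score model used here, the quality of a question can be expressed as the product of its reliability and validity coefficients. The following is a minimal illustrative sketch of that decomposition; the function name and coefficient values are hypothetical examples, not CRONOS estimates.

```python
# Sketch of the true-score decomposition used in MTMM analysis. The observed
# response relates to the true score via the reliability coefficient r, and
# the true score relates to the latent trait via the validity coefficient v;
# the quality coefficient is q = r * v, and q**2 is the share of observed
# variance accounted for by the trait of interest.

def measurement_quality(reliability: float, validity: float) -> float:
    """Quality coefficient q = r * v from the true score model."""
    if not (0.0 <= reliability <= 1.0 and 0.0 <= validity <= 1.0):
        raise ValueError("coefficients must lie in [0, 1]")
    return reliability * validity

# Example with hypothetical coefficients for a single question:
q = measurement_quality(reliability=0.9, validity=0.95)
print(round(q, 3))     # quality coefficient q
print(round(q**2, 3))  # proportion of variance explained by the trait
```

Comparing q across the experimental versions (scale length, labelling direction, branching) is what allows the methods to be ranked by measurement quality.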

Does Repeated Survey Taking Negatively Affect Response Quality?

Ms Hannah Schwarz (Universitat Pompeu Fabra (RECSM)) - Presenting Author
Dr Melanie Revilla (Universitat Pompeu Fabra (RECSM))

As the use of surveys, especially online surveys, becomes ever more widespread, respondents are increasingly likely to have previous survey experience. This can decrease the effort they put into answering questions, even more so if respondents belong to a panel and are thus taking surveys regularly. The literature on panel conditioning and panel fatigue has investigated whether repeated survey taking indeed affects response quality. Furthermore, the literature on frequent respondents suggests that extrinsically motivated respondents deliver lower-quality responses. We will use data from the ESS CRONOS panel to investigate whether response quality is affected by survey experience accumulated before joining the panel and by the type of motivation respondents have for taking the panel surveys. We generally expect previous survey experience to lead respondents to invest less effort in survey taking, thus decreasing response quality. On the other hand, we expect intrinsic motivation to lead respondents to invest more effort and therefore provide higher-quality responses.

Responding behaviour of CRONOS panellists

Dr Nejc Berzelak (University of Ljubljana, Faculty of Social Sciences) - Presenting Author
Dr Wiebke Weber (Universitat Pompeu Fabra)
Dr Ana Villar (Facebook)

In order to provide accurate answers to survey questions, respondents need to perform the response process with sufficient cognitive effort. Although previous studies have found that probability-based online panels are capable of ensuring high-quality data, a comprehensive assessment of CRONOS data quality requires consideration of its specifics. Two key aspects are potential measurement quality differences between the three countries (Estonia, Great Britain and Slovenia) and differences between the devices panellists used to complete the web questionnaires. The latter is particularly important because of the significant growth in the use of mobile phones to participate in web surveys in recent years, and because CRONOS panellists without internet access were provided with tablets and mobile internet connections so that they could participate in the panel.

The paper evaluates the measurement quality of the CRONOS data by focusing on the responding behaviour of the panellists. The study scrutinises four key data quality indicators: item nonresponse, non-differentiation, completion times, and respondents’ subjective evaluations of the survey. In addition to a descriptive presentation of these indicators and a comparison of their performance across individual CRONOS waves, the relations between the indicators and selected explanatory variables are studied. The explanatory variables include basic socio-demographic characteristics of the panellists, relevant personality variables (such as need for cognition and need for evaluation), as well as available paradata about the response process and the survey completion environment.
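Two of the indicators above, item nonresponse and non-differentiation (straightlining), can be computed directly from a respondent's answers to an item battery. The sketch below uses toy data and hypothetical function names; it is one simple way such indicators might be operationalised, not the paper's actual implementation.

```python
# Illustrative computation of two data quality indicators from a toy
# respondent-by-item grid. Answers are scale values; None marks a skipped item.

def item_nonresponse_rate(answers):
    """Share of items the respondent left unanswered (None)."""
    return sum(a is None for a in answers) / len(answers)

def nondifferentiation(answers):
    """Simple straightlining measure: share of identical consecutive
    answers among the answered items (1.0 = same answer throughout)."""
    valid = [a for a in answers if a is not None]
    if len(valid) < 2:
        return None  # not computable with fewer than two answers
    same = sum(valid[i] == valid[i + 1] for i in range(len(valid) - 1))
    return same / (len(valid) - 1)

grid = [3, 3, 3, None, 3, 3]  # answers to a 6-item battery, one item skipped
print(item_nonresponse_rate(grid))  # one of six items missing
print(nondifferentiation(grid))     # 1.0: the respondent straightlined
```

In practice such per-respondent scores would then be related to the explanatory variables (socio-demographics, personality measures, paradata) across waves.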

The results of this study provide insights into the performance of the CRONOS panel in terms of measurement quality related to panellist behaviour. The findings offer valuable guidance for similar future endeavours and contribute to an improved understanding of how the studied respondent-related factors can affect the quality of survey data.